Test Report: QEMU_macOS 19377

81fa2899e75fb9e546311166288b8d27068854ba:2024-08-05:35656

Tests failed (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.07
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.95
36 TestAddons/Setup 10.53
37 TestCertOptions 10.12
38 TestCertExpiration 195.17
39 TestDockerFlags 10.28
40 TestForceSystemdFlag 10.21
41 TestForceSystemdEnv 10.29
47 TestErrorSpam/setup 9.88
56 TestFunctional/serial/StartWithProxy 10.08
58 TestFunctional/serial/SoftStart 5.25
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
70 TestFunctional/serial/MinikubeKubectlCmd 0.73
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.86
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.12
86 TestFunctional/parallel/ServiceCmdConnect 0.13
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.11
91 TestFunctional/parallel/CpCmd 0.26
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
111 TestFunctional/parallel/DockerEnv/bash 0.04
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.04
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.05
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 112.46
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.32
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.12
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 38.31
150 TestMultiControlPlane/serial/StartCluster 9.79
151 TestMultiControlPlane/serial/DeployApp 73.04
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
159 TestMultiControlPlane/serial/RestartSecondaryNode 56.05
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.25
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
164 TestMultiControlPlane/serial/StopCluster 3.55
165 TestMultiControlPlane/serial/RestartCluster 5.26
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
171 TestImageBuild/serial/Setup 9.87
174 TestJSONOutput/start/Command 9.77
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.18
206 TestMountStart/serial/StartWithMountFirst 10.06
209 TestMultiNode/serial/FreshStart2Nodes 9.81
210 TestMultiNode/serial/DeployApp2Nodes 110.12
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.08
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 52.09
218 TestMultiNode/serial/RestartKeepsNodes 7.44
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 3.38
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 20.53
226 TestPreload 10.03
228 TestScheduledStopUnix 10
229 TestSkaffold 12.75
232 TestRunningBinaryUpgrade 595.21
234 TestKubernetesUpgrade 18.74
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.45
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.22
250 TestStoppedBinaryUpgrade/Upgrade 585.85
252 TestPause/serial/Start 9.85
262 TestNoKubernetes/serial/StartWithK8s 9.77
263 TestNoKubernetes/serial/StartWithStopK8s 5.26
264 TestNoKubernetes/serial/Start 5.32
268 TestNoKubernetes/serial/StartNoArgs 5.3
270 TestNetworkPlugins/group/auto/Start 10.13
271 TestNetworkPlugins/group/kindnet/Start 9.68
272 TestNetworkPlugins/group/flannel/Start 9.78
273 TestNetworkPlugins/group/enable-default-cni/Start 9.85
274 TestNetworkPlugins/group/bridge/Start 10.11
275 TestNetworkPlugins/group/kubenet/Start 9.73
276 TestNetworkPlugins/group/custom-flannel/Start 9.89
277 TestNetworkPlugins/group/calico/Start 9.85
278 TestNetworkPlugins/group/false/Start 9.85
280 TestStartStop/group/old-k8s-version/serial/FirstStart 9.85
281 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
282 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.23
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 9.83
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/no-preload/serial/SecondStart 5.23
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 11.9
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.07
306 TestStartStop/group/embed-certs/serial/DeployApp 0.09
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
310 TestStartStop/group/embed-certs/serial/SecondStart 5.26
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
314 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
315 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
316 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
317 TestStartStop/group/embed-certs/serial/Pause 0.1
319 TestStartStop/group/newest-cni/serial/FirstStart 10.01
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.57
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/SecondStart 5.26
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (15.07s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-095000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-095000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (15.063984042s)

-- stdout --
	{"specversion":"1.0","id":"215d226e-7425-477d-9f9a-137b09a7c83c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-095000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"02de12ca-b22e-4de6-a19a-0c279ab399d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19377"}}
	{"specversion":"1.0","id":"00b7cf2a-f253-411c-81a0-728b3134ec31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig"}}
	{"specversion":"1.0","id":"3ab2a852-0c2d-4843-8bd7-af20d4906562","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5f27e7ba-8be6-43f5-b488-1dfb9fdfe106","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"799fb37a-609e-42de-91f2-e84e1b3c2469","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube"}}
	{"specversion":"1.0","id":"eaa2f8a0-1fe2-4d91-b0fe-2874d145b99e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"6ff34562-1697-42ed-a139-68a8fa176415","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e494973e-dfc5-4391-8ca9-757baeaf279e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4fd9171a-c37c-485b-bcb5-3a666f404344","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"936ff2bf-82c3-48ba-b0b7-ea8a2a7f8c5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-095000\" primary control-plane node in \"download-only-095000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"033c2ac3-3d92-4e7d-9185-d69f0ed56585","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"29175300-2606-4965-868a-b065c797e8d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1067a1a80 0x1067a1a80 0x1067a1a80 0x1067a1a80 0x1067a1a80 0x1067a1a80 0x1067a1a80] Decompressors:map[bz2:0x140008009b0 gz:0x140008009b8 tar:0x14000800930 tar.bz2:0x14000800960 tar.gz:0x14000800980 tar.xz:0x14000800990 tar.zst:0x140008009a0 tbz2:0x14000800960 tgz:0x14000800980 txz:0x14000800990 tzst:0x140008009a0 xz:0x140008009c0 zip:0x140008009d0 zst:0x140008009c8] Getters:map[file:0x14000701490 http:0x140006a0280 https:0x140006a02d0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"5d435eea-d2aa-4df5-b6d7-b3acafebca4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0805 04:22:07.439819    7626 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:22:07.439959    7626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:22:07.439962    7626 out.go:304] Setting ErrFile to fd 2...
	I0805 04:22:07.439964    7626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:22:07.440084    7626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	W0805 04:22:07.440169    7626 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19377-7130/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19377-7130/.minikube/config/config.json: no such file or directory
	I0805 04:22:07.441380    7626 out.go:298] Setting JSON to true
	I0805 04:22:07.457642    7626 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4897,"bootTime":1722852030,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:22:07.457717    7626 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:22:07.462632    7626 out.go:97] [download-only-095000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:22:07.462785    7626 notify.go:220] Checking for updates...
	W0805 04:22:07.462881    7626 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball: no such file or directory
	I0805 04:22:07.466318    7626 out.go:169] MINIKUBE_LOCATION=19377
	I0805 04:22:07.469324    7626 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:22:07.474314    7626 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:22:07.477233    7626 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:22:07.480338    7626 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	W0805 04:22:07.486180    7626 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 04:22:07.486374    7626 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:22:07.489209    7626 out.go:97] Using the qemu2 driver based on user configuration
	I0805 04:22:07.489228    7626 start.go:297] selected driver: qemu2
	I0805 04:22:07.489231    7626 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:22:07.489301    7626 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:22:07.492342    7626 out.go:169] Automatically selected the socket_vmnet network
	I0805 04:22:07.497714    7626 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0805 04:22:07.497796    7626 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 04:22:07.497842    7626 cni.go:84] Creating CNI manager for ""
	I0805 04:22:07.497858    7626 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 04:22:07.497910    7626 start.go:340] cluster config:
	{Name:download-only-095000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-095000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:22:07.501816    7626 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:22:07.506287    7626 out.go:97] Downloading VM boot image ...
	I0805 04:22:07.506302    7626 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0805 04:22:15.654351    7626 out.go:97] Starting "download-only-095000" primary control-plane node in "download-only-095000" cluster
	I0805 04:22:15.654378    7626 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 04:22:15.711237    7626 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 04:22:15.711243    7626 cache.go:56] Caching tarball of preloaded images
	I0805 04:22:15.711398    7626 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 04:22:15.716469    7626 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0805 04:22:15.716476    7626 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 04:22:15.792425    7626 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 04:22:21.354204    7626 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 04:22:21.354359    7626 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 04:22:22.048868    7626 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 04:22:22.049069    7626 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/download-only-095000/config.json ...
	I0805 04:22:22.049087    7626 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/download-only-095000/config.json: {Name:mke8a6efef77f0e2f34a481607e36c77e7e08333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:22:22.049328    7626 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 04:22:22.049532    7626 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0805 04:22:22.426872    7626 out.go:169] 
	W0805 04:22:22.431973    7626 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1067a1a80 0x1067a1a80 0x1067a1a80 0x1067a1a80 0x1067a1a80 0x1067a1a80 0x1067a1a80] Decompressors:map[bz2:0x140008009b0 gz:0x140008009b8 tar:0x14000800930 tar.bz2:0x14000800960 tar.gz:0x14000800980 tar.xz:0x14000800990 tar.zst:0x140008009a0 tbz2:0x14000800960 tgz:0x14000800980 txz:0x14000800990 tzst:0x140008009a0 xz:0x140008009c0 zip:0x140008009d0 zst:0x140008009c8] Getters:map[file:0x14000701490 http:0x140006a0280 https:0x140006a02d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0805 04:22:22.431996    7626 out_reason.go:110] 
	W0805 04:22:22.439947    7626 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:22:22.443854    7626 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-095000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (15.07s)
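
The exit status 40 above is a pure download failure, not a cluster fault: the checksum fetch for https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 comes back "bad response code: 404". Kubernetes v1.20.0 predates published darwin/arm64 (Apple Silicon) release binaries, so that URL can never resolve on this runner. A minimal sketch in Go reproducing the probe; the two URLs are taken verbatim from the error message, while the file name and output format are illustrative:

	// probe_kubectl_404.go: HEAD the kubectl binary and checksum URLs that the
	// failing download above requested. For the v1.20.0/darwin/arm64 combination
	// both are expected to report "404 Not Found".
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		urls := []string{
			"https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl",
			"https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256",
		}
		for _, u := range urls {
			resp, err := http.Head(u)
			if err != nil {
				fmt.Printf("%s: %v\n", u, err)
				continue
			}
			resp.Body.Close()
			fmt.Printf("%s: %s\n", u, resp.Status) // e.g. "404 Not Found"
		}
	}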

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
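
This follow-on failure is a direct consequence of the previous test: the download never completed, so nothing exists at the expected cache path and the test's stat fails. The assertion reduces to a file-existence check; a minimal sketch, with the cache path copied from the failure message above (the file name is illustrative):

	// stat_cached_kubectl.go: the existence check TestDownloadOnly/v1.20.0/kubectl
	// makes against minikube's download cache.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const path = "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			fmt.Println("FAIL:", err) // this run: "no such file or directory"
			return
		}
		fmt.Println("kubectl binary is cached")
	}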

TestOffline (9.95s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-114000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-114000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.803000667s)

-- stdout --
	* [offline-docker-114000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-114000" primary control-plane node in "offline-docker-114000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-114000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:33:56.339499    9434 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:33:56.339621    9434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:33:56.339624    9434 out.go:304] Setting ErrFile to fd 2...
	I0805 04:33:56.339626    9434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:33:56.339771    9434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:33:56.340963    9434 out.go:298] Setting JSON to false
	I0805 04:33:56.358961    9434 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5606,"bootTime":1722852030,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:33:56.359084    9434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:33:56.364034    9434 out.go:177] * [offline-docker-114000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:33:56.368050    9434 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:33:56.368053    9434 notify.go:220] Checking for updates...
	I0805 04:33:56.375876    9434 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:33:56.379060    9434 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:33:56.382024    9434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:33:56.385071    9434 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:33:56.388075    9434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:33:56.391391    9434 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:33:56.391451    9434 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:33:56.395048    9434 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:33:56.402053    9434 start.go:297] selected driver: qemu2
	I0805 04:33:56.402063    9434 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:33:56.402072    9434 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:33:56.404202    9434 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:33:56.407052    9434 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:33:56.410110    9434 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:33:56.410148    9434 cni.go:84] Creating CNI manager for ""
	I0805 04:33:56.410155    9434 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:33:56.410162    9434 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:33:56.410190    9434 start.go:340] cluster config:
	{Name:offline-docker-114000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-114000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:33:56.414010    9434 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:33:56.420874    9434 out.go:177] * Starting "offline-docker-114000" primary control-plane node in "offline-docker-114000" cluster
	I0805 04:33:56.424964    9434 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:33:56.425007    9434 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:33:56.425015    9434 cache.go:56] Caching tarball of preloaded images
	I0805 04:33:56.425101    9434 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:33:56.425107    9434 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:33:56.425166    9434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/offline-docker-114000/config.json ...
	I0805 04:33:56.425176    9434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/offline-docker-114000/config.json: {Name:mk72105befe86d18a3291201b6943f213dc4d7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:33:56.425413    9434 start.go:360] acquireMachinesLock for offline-docker-114000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:33:56.425446    9434 start.go:364] duration metric: took 24.959µs to acquireMachinesLock for "offline-docker-114000"
	I0805 04:33:56.425457    9434 start.go:93] Provisioning new machine with config: &{Name:offline-docker-114000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-114000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:33:56.425505    9434 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:33:56.433059    9434 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 04:33:56.449222    9434 start.go:159] libmachine.API.Create for "offline-docker-114000" (driver="qemu2")
	I0805 04:33:56.449253    9434 client.go:168] LocalClient.Create starting
	I0805 04:33:56.449331    9434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:33:56.449363    9434 main.go:141] libmachine: Decoding PEM data...
	I0805 04:33:56.449372    9434 main.go:141] libmachine: Parsing certificate...
	I0805 04:33:56.449413    9434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:33:56.449437    9434 main.go:141] libmachine: Decoding PEM data...
	I0805 04:33:56.449447    9434 main.go:141] libmachine: Parsing certificate...
	I0805 04:33:56.449841    9434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:33:56.592694    9434 main.go:141] libmachine: Creating SSH key...
	I0805 04:33:56.676118    9434 main.go:141] libmachine: Creating Disk image...
	I0805 04:33:56.676131    9434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:33:56.676332    9434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/disk.qcow2
	I0805 04:33:56.693783    9434 main.go:141] libmachine: STDOUT: 
	I0805 04:33:56.693802    9434 main.go:141] libmachine: STDERR: 
	I0805 04:33:56.693865    9434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/disk.qcow2 +20000M
	I0805 04:33:56.702427    9434 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:33:56.702447    9434 main.go:141] libmachine: STDERR: 
	I0805 04:33:56.702489    9434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/disk.qcow2
	I0805 04:33:56.702495    9434 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:33:56.702506    9434 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:33:56.702532    9434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:03:50:97:9c:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/disk.qcow2
	I0805 04:33:56.704392    9434 main.go:141] libmachine: STDOUT: 
	I0805 04:33:56.704408    9434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:33:56.704430    9434 client.go:171] duration metric: took 255.171042ms to LocalClient.Create
	I0805 04:33:58.706521    9434 start.go:128] duration metric: took 2.280985416s to createHost
	I0805 04:33:58.706545    9434 start.go:83] releasing machines lock for "offline-docker-114000", held for 2.281072708s
	W0805 04:33:58.706565    9434 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:33:58.711091    9434 out.go:177] * Deleting "offline-docker-114000" in qemu2 ...
	W0805 04:33:58.724646    9434 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:33:58.724657    9434 start.go:729] Will try again in 5 seconds ...
	I0805 04:34:03.726953    9434 start.go:360] acquireMachinesLock for offline-docker-114000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:34:03.727376    9434 start.go:364] duration metric: took 344.166µs to acquireMachinesLock for "offline-docker-114000"
	I0805 04:34:03.727529    9434 start.go:93] Provisioning new machine with config: &{Name:offline-docker-114000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-114000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:34:03.727789    9434 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:34:03.736978    9434 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 04:34:03.787118    9434 start.go:159] libmachine.API.Create for "offline-docker-114000" (driver="qemu2")
	I0805 04:34:03.787182    9434 client.go:168] LocalClient.Create starting
	I0805 04:34:03.787298    9434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:34:03.787366    9434 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:03.787381    9434 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:03.787438    9434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:34:03.787485    9434 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:03.787502    9434 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:03.788060    9434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:34:03.946696    9434 main.go:141] libmachine: Creating SSH key...
	I0805 04:34:04.045415    9434 main.go:141] libmachine: Creating Disk image...
	I0805 04:34:04.045422    9434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:34:04.045900    9434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/disk.qcow2
	I0805 04:34:04.054997    9434 main.go:141] libmachine: STDOUT: 
	I0805 04:34:04.055015    9434 main.go:141] libmachine: STDERR: 
	I0805 04:34:04.055067    9434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/disk.qcow2 +20000M
	I0805 04:34:04.062945    9434 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:34:04.062963    9434 main.go:141] libmachine: STDERR: 
	I0805 04:34:04.062975    9434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/disk.qcow2
	I0805 04:34:04.062979    9434 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:34:04.062986    9434 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:34:04.063032    9434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:75:e4:1e:d7:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/offline-docker-114000/disk.qcow2
	I0805 04:34:04.064584    9434 main.go:141] libmachine: STDOUT: 
	I0805 04:34:04.064598    9434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:34:04.064617    9434 client.go:171] duration metric: took 277.425833ms to LocalClient.Create
	I0805 04:34:06.066812    9434 start.go:128] duration metric: took 2.338967917s to createHost
	I0805 04:34:06.066866    9434 start.go:83] releasing machines lock for "offline-docker-114000", held for 2.339439542s
	W0805 04:34:06.067219    9434 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-114000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-114000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:34:06.077768    9434 out.go:177] 
	W0805 04:34:06.085989    9434 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:34:06.086021    9434 out.go:239] * 
	* 
	W0805 04:34:06.089046    9434 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:34:06.098816    9434 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-114000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-05 04:34:06.120825 -0700 PDT m=+718.696527209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-114000 -n offline-docker-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-114000 -n offline-docker-114000: exit status 7 (63.357041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-114000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-114000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-114000
--- FAIL: TestOffline (9.95s)
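
This is the failure mode that dominates the rest of the report: exit status 80 (GUEST_PROVISION). The qemu2 driver launches the VM by handing the qemu-system-aarch64 command line to /opt/socket_vmnet/bin/socket_vmnet_client with /var/run/socket_vmnet as its first argument (see the libmachine "executing:" lines in the stderr above), and both creation attempts die with Failed to connect to "/var/run/socket_vmnet": Connection refused. Nothing is listening on that socket, which points at the socket_vmnet daemon not running on this host. A minimal pre-flight check, as a sketch in Go assuming only the socket path from the log:

	// check_socket_vmnet.go: dial the unix socket that socket_vmnet_client
	// connects to. "connection refused" here predicts the GUEST_PROVISION
	// failures above; a successful dial means the daemon is at least listening.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}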

TestAddons/Setup (10.53s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-939000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-939000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.525832125s)

-- stdout --
	* [addons-939000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-939000" primary control-plane node in "addons-939000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-939000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:22:39.618454    7742 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:22:39.618587    7742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:22:39.618591    7742 out.go:304] Setting ErrFile to fd 2...
	I0805 04:22:39.618593    7742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:22:39.618741    7742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:22:39.619827    7742 out.go:298] Setting JSON to false
	I0805 04:22:39.636018    7742 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4929,"bootTime":1722852030,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:22:39.636092    7742 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:22:39.639888    7742 out.go:177] * [addons-939000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:22:39.646905    7742 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:22:39.646928    7742 notify.go:220] Checking for updates...
	I0805 04:22:39.652876    7742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:22:39.655863    7742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:22:39.657354    7742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:22:39.660873    7742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:22:39.663884    7742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:22:39.667054    7742 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:22:39.670820    7742 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:22:39.677819    7742 start.go:297] selected driver: qemu2
	I0805 04:22:39.677826    7742 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:22:39.677832    7742 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:22:39.680150    7742 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:22:39.683820    7742 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:22:39.686922    7742 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:22:39.686964    7742 cni.go:84] Creating CNI manager for ""
	I0805 04:22:39.686978    7742 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:22:39.686987    7742 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:22:39.687014    7742 start.go:340] cluster config:
	{Name:addons-939000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:22:39.690657    7742 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:22:39.697902    7742 out.go:177] * Starting "addons-939000" primary control-plane node in "addons-939000" cluster
	I0805 04:22:39.701873    7742 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:22:39.701893    7742 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:22:39.701905    7742 cache.go:56] Caching tarball of preloaded images
	I0805 04:22:39.701961    7742 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:22:39.701968    7742 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:22:39.702176    7742 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/addons-939000/config.json ...
	I0805 04:22:39.702187    7742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/addons-939000/config.json: {Name:mkf756a90aeda0365ed449869653f975cf7ca138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:22:39.702616    7742 start.go:360] acquireMachinesLock for addons-939000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:22:39.702676    7742 start.go:364] duration metric: took 54.875µs to acquireMachinesLock for "addons-939000"
	I0805 04:22:39.702686    7742 start.go:93] Provisioning new machine with config: &{Name:addons-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:22:39.702717    7742 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:22:39.710856    7742 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0805 04:22:39.728423    7742 start.go:159] libmachine.API.Create for "addons-939000" (driver="qemu2")
	I0805 04:22:39.728450    7742 client.go:168] LocalClient.Create starting
	I0805 04:22:39.728582    7742 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:22:39.984959    7742 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:22:40.211629    7742 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:22:40.498879    7742 main.go:141] libmachine: Creating SSH key...
	I0805 04:22:40.626767    7742 main.go:141] libmachine: Creating Disk image...
	I0805 04:22:40.626789    7742 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:22:40.627008    7742 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/disk.qcow2
	I0805 04:22:40.637286    7742 main.go:141] libmachine: STDOUT: 
	I0805 04:22:40.637303    7742 main.go:141] libmachine: STDERR: 
	I0805 04:22:40.637346    7742 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/disk.qcow2 +20000M
	I0805 04:22:40.645270    7742 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:22:40.645294    7742 main.go:141] libmachine: STDERR: 
	I0805 04:22:40.645311    7742 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/disk.qcow2
	I0805 04:22:40.645316    7742 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:22:40.645344    7742 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:22:40.645427    7742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:80:f6:50:7b:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/disk.qcow2
	I0805 04:22:40.647136    7742 main.go:141] libmachine: STDOUT: 
	I0805 04:22:40.647170    7742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:22:40.647196    7742 client.go:171] duration metric: took 918.739208ms to LocalClient.Create
	I0805 04:22:42.649359    7742 start.go:128] duration metric: took 2.946641292s to createHost
	I0805 04:22:42.649419    7742 start.go:83] releasing machines lock for "addons-939000", held for 2.946754667s
	W0805 04:22:42.649477    7742 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:22:42.660553    7742 out.go:177] * Deleting "addons-939000" in qemu2 ...
	W0805 04:22:42.687850    7742 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:22:42.687880    7742 start.go:729] Will try again in 5 seconds ...
	I0805 04:22:47.690134    7742 start.go:360] acquireMachinesLock for addons-939000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:22:47.690575    7742 start.go:364] duration metric: took 345.833µs to acquireMachinesLock for "addons-939000"
	I0805 04:22:47.690694    7742 start.go:93] Provisioning new machine with config: &{Name:addons-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:22:47.691023    7742 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:22:47.703714    7742 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0805 04:22:47.753687    7742 start.go:159] libmachine.API.Create for "addons-939000" (driver="qemu2")
	I0805 04:22:47.753730    7742 client.go:168] LocalClient.Create starting
	I0805 04:22:47.753860    7742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:22:47.753923    7742 main.go:141] libmachine: Decoding PEM data...
	I0805 04:22:47.753945    7742 main.go:141] libmachine: Parsing certificate...
	I0805 04:22:47.754017    7742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:22:47.754066    7742 main.go:141] libmachine: Decoding PEM data...
	I0805 04:22:47.754079    7742 main.go:141] libmachine: Parsing certificate...
	I0805 04:22:47.754821    7742 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:22:47.953381    7742 main.go:141] libmachine: Creating SSH key...
	I0805 04:22:48.053729    7742 main.go:141] libmachine: Creating Disk image...
	I0805 04:22:48.053734    7742 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:22:48.053927    7742 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/disk.qcow2
	I0805 04:22:48.063815    7742 main.go:141] libmachine: STDOUT: 
	I0805 04:22:48.063830    7742 main.go:141] libmachine: STDERR: 
	I0805 04:22:48.063874    7742 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/disk.qcow2 +20000M
	I0805 04:22:48.071602    7742 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:22:48.071619    7742 main.go:141] libmachine: STDERR: 
	I0805 04:22:48.071631    7742 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/disk.qcow2
	I0805 04:22:48.071634    7742 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:22:48.071645    7742 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:22:48.071669    7742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:f2:b6:48:57:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/addons-939000/disk.qcow2
	I0805 04:22:48.073382    7742 main.go:141] libmachine: STDOUT: 
	I0805 04:22:48.073394    7742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:22:48.073409    7742 client.go:171] duration metric: took 319.675458ms to LocalClient.Create
	I0805 04:22:50.075666    7742 start.go:128] duration metric: took 2.384601042s to createHost
	I0805 04:22:50.075747    7742 start.go:83] releasing machines lock for "addons-939000", held for 2.385163916s
	W0805 04:22:50.076079    7742 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:22:50.084608    7742 out.go:177] 
	W0805 04:22:50.091745    7742 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:22:50.091771    7742 out.go:239] * 
	* 
	W0805 04:22:50.094475    7742 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:22:50.101557    7742 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-939000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.53s)
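
The trace above shows the full create path: libmachine prepares the disk with two qemu-img invocations (a raw-to-qcow2 convert, then a +20000M resize), both of which succeed, and only the final socket_vmnet_client launch fails. For reference, a sketch of those two disk steps via os/exec, with placeholder file names in place of the jenkins paths:

	// diskcreate.go: replay the two qemu-img steps from the log above.
	// Assumes a raw seed image already exists at "disk.qcow2.raw".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		raw, qcow2 := "disk.qcow2.raw", "disk.qcow2"
		// Matches "qemu-img convert -f raw -O qcow2 <raw> <qcow2>" in the log.
		if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2); err != nil {
			panic(err)
		}
		// Matches "qemu-img resize <qcow2> +20000M" in the log.
		if err := run("qemu-img", "resize", qcow2, "+20000M"); err != nil {
			panic(err)
		}
	}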

TestCertOptions (10.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-155000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-155000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.862522209s)

-- stdout --
	* [cert-options-155000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-155000" primary control-plane node in "cert-options-155000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-155000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-155000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-155000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-155000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.294208ms)

-- stdout --
	* The control-plane node cert-options-155000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-155000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-155000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-155000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-155000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-155000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.390792ms)

-- stdout --
	* The control-plane node cert-options-155000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-155000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-155000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-155000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-155000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-05 04:34:36.846628 -0700 PDT m=+749.422031959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-155000 -n cert-options-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-155000 -n cert-options-155000: exit status 7 (30.1015ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-155000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-155000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-155000
--- FAIL: TestCertOptions (10.12s)
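
The SAN assertions at cert_options_test.go:69 never really ran here: the VM never booted, so there was no apiserver.crt to inspect. With a running cluster they amount to parsing the certificate and checking that the values passed via --apiserver-ips and --apiserver-names appear in its SANs. A minimal sketch of that check, assuming a local PEM copy of the certificate ("apiserver.crt" is a placeholder path):

	// sancheck.go: verify the requested IPs and DNS names appear in a
	// certificate's Subject Alternative Names.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The values the test passes via --apiserver-ips / --apiserver-names.
		for _, want := range []string{"127.0.0.1", "192.168.15.15"} {
			found := false
			for _, ip := range cert.IPAddresses {
				if ip.Equal(net.ParseIP(want)) {
					found = true
				}
			}
			fmt.Printf("IP  %s in SAN: %v\n", want, found)
		}
		for _, want := range []string{"localhost", "www.google.com"} {
			found := false
			for _, name := range cert.DNSNames {
				if name == want {
					found = true
				}
			}
			fmt.Printf("DNS %s in SAN: %v\n", want, found)
		}
	}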

TestCertExpiration (195.17s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-871000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-871000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.827764958s)

-- stdout --
	* [cert-expiration-871000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-871000" primary control-plane node in "cert-expiration-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-871000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-871000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-871000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.221257292s)

-- stdout --
	* [cert-expiration-871000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-871000" primary control-plane node in "cert-expiration-871000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-871000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-871000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-871000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-871000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-871000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-871000" primary control-plane node in "cert-expiration-871000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-871000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-871000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-871000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-05 04:37:36.764065 -0700 PDT m=+929.337722293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-871000 -n cert-expiration-871000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-871000 -n cert-expiration-871000: exit status 7 (33.588583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-871000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-871000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-871000
--- FAIL: TestCertExpiration (195.17s)
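
TestCertExpiration never exercised certificate rotation: both starts died at the same socket_vmnet connect, and most of the 195s is the 3m --cert-expiration window the test waits out between the two attempts. The condition it is built around is, in effect, a certificate whose NotAfter has passed; a sketch of that check, assuming a PEM certificate on disk ("apiserver.crt" is a placeholder path):

	// expirycheck.go: report whether a certificate has expired.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().After(cert.NotAfter) {
			fmt.Println("certificate expired at", cert.NotAfter)
		} else {
			fmt.Println("certificate valid until", cert.NotAfter)
		}
	}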

TestDockerFlags (10.28s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-390000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-390000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.055281041s)

-- stdout --
	* [docker-flags-390000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-390000" primary control-plane node in "docker-flags-390000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-390000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:34:16.576780    9621 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:34:16.576901    9621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:34:16.576904    9621 out.go:304] Setting ErrFile to fd 2...
	I0805 04:34:16.576911    9621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:34:16.577025    9621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:34:16.578111    9621 out.go:298] Setting JSON to false
	I0805 04:34:16.594290    9621 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5626,"bootTime":1722852030,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:34:16.594363    9621 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:34:16.599200    9621 out.go:177] * [docker-flags-390000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:34:16.607201    9621 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:34:16.607245    9621 notify.go:220] Checking for updates...
	I0805 04:34:16.614149    9621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:34:16.617192    9621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:34:16.620214    9621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:34:16.623078    9621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:34:16.626142    9621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:34:16.629593    9621 config.go:182] Loaded profile config "force-systemd-flag-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:34:16.629662    9621 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:34:16.629706    9621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:34:16.633061    9621 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:34:16.640188    9621 start.go:297] selected driver: qemu2
	I0805 04:34:16.640196    9621 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:34:16.640203    9621 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:34:16.642460    9621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:34:16.643742    9621 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:34:16.646258    9621 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0805 04:34:16.646278    9621 cni.go:84] Creating CNI manager for ""
	I0805 04:34:16.646292    9621 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:34:16.646297    9621 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:34:16.646340    9621 start.go:340] cluster config:
	{Name:docker-flags-390000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:34:16.650091    9621 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:34:16.658149    9621 out.go:177] * Starting "docker-flags-390000" primary control-plane node in "docker-flags-390000" cluster
	I0805 04:34:16.662133    9621 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:34:16.662149    9621 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:34:16.662165    9621 cache.go:56] Caching tarball of preloaded images
	I0805 04:34:16.662235    9621 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:34:16.662249    9621 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:34:16.662315    9621 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/docker-flags-390000/config.json ...
	I0805 04:34:16.662326    9621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/docker-flags-390000/config.json: {Name:mkd36751259989069ee6820eefd35a3c5624b00e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:34:16.662545    9621 start.go:360] acquireMachinesLock for docker-flags-390000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:34:16.662580    9621 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "docker-flags-390000"
	I0805 04:34:16.662591    9621 start.go:93] Provisioning new machine with config: &{Name:docker-flags-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:34:16.662622    9621 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:34:16.671183    9621 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 04:34:16.688798    9621 start.go:159] libmachine.API.Create for "docker-flags-390000" (driver="qemu2")
	I0805 04:34:16.688830    9621 client.go:168] LocalClient.Create starting
	I0805 04:34:16.688898    9621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:34:16.688929    9621 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:16.688942    9621 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:16.688986    9621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:34:16.689010    9621 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:16.689016    9621 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:16.689420    9621 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:34:16.832827    9621 main.go:141] libmachine: Creating SSH key...
	I0805 04:34:17.099962    9621 main.go:141] libmachine: Creating Disk image...
	I0805 04:34:17.099969    9621 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:34:17.100214    9621 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/disk.qcow2
	I0805 04:34:17.109980    9621 main.go:141] libmachine: STDOUT: 
	I0805 04:34:17.109998    9621 main.go:141] libmachine: STDERR: 
	I0805 04:34:17.110047    9621 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/disk.qcow2 +20000M
	I0805 04:34:17.117900    9621 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:34:17.117911    9621 main.go:141] libmachine: STDERR: 
	I0805 04:34:17.117920    9621 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/disk.qcow2
	I0805 04:34:17.117924    9621 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:34:17.117938    9621 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:34:17.117972    9621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:59:83:bb:22:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/disk.qcow2
	I0805 04:34:17.119616    9621 main.go:141] libmachine: STDOUT: 
	I0805 04:34:17.119634    9621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:34:17.119651    9621 client.go:171] duration metric: took 430.811833ms to LocalClient.Create
	I0805 04:34:19.121830    9621 start.go:128] duration metric: took 2.459166458s to createHost
	I0805 04:34:19.121867    9621 start.go:83] releasing machines lock for "docker-flags-390000", held for 2.459254292s
	W0805 04:34:19.121967    9621 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:34:19.143987    9621 out.go:177] * Deleting "docker-flags-390000" in qemu2 ...
	W0805 04:34:19.161776    9621 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:34:19.161796    9621 start.go:729] Will try again in 5 seconds ...
	I0805 04:34:24.163998    9621 start.go:360] acquireMachinesLock for docker-flags-390000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:34:24.164390    9621 start.go:364] duration metric: took 328µs to acquireMachinesLock for "docker-flags-390000"
	I0805 04:34:24.164510    9621 start.go:93] Provisioning new machine with config: &{Name:docker-flags-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:34:24.164787    9621 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:34:24.172437    9621 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 04:34:24.222671    9621 start.go:159] libmachine.API.Create for "docker-flags-390000" (driver="qemu2")
	I0805 04:34:24.222718    9621 client.go:168] LocalClient.Create starting
	I0805 04:34:24.222841    9621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:34:24.222906    9621 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:24.222925    9621 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:24.222990    9621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:34:24.223035    9621 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:24.223052    9621 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:24.224179    9621 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:34:24.381547    9621 main.go:141] libmachine: Creating SSH key...
	I0805 04:34:24.536427    9621 main.go:141] libmachine: Creating Disk image...
	I0805 04:34:24.536436    9621 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:34:24.536609    9621 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/disk.qcow2
	I0805 04:34:24.545801    9621 main.go:141] libmachine: STDOUT: 
	I0805 04:34:24.545824    9621 main.go:141] libmachine: STDERR: 
	I0805 04:34:24.545869    9621 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/disk.qcow2 +20000M
	I0805 04:34:24.553710    9621 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:34:24.553739    9621 main.go:141] libmachine: STDERR: 
	I0805 04:34:24.553752    9621 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/disk.qcow2
	I0805 04:34:24.553766    9621 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:34:24.553778    9621 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:34:24.553820    9621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:bd:10:2b:54:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/docker-flags-390000/disk.qcow2
	I0805 04:34:24.555533    9621 main.go:141] libmachine: STDOUT: 
	I0805 04:34:24.555550    9621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:34:24.555566    9621 client.go:171] duration metric: took 332.840541ms to LocalClient.Create
	I0805 04:34:26.557786    9621 start.go:128] duration metric: took 2.392933s to createHost
	I0805 04:34:26.557849    9621 start.go:83] releasing machines lock for "docker-flags-390000", held for 2.3934145s
	W0805 04:34:26.558320    9621 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-390000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-390000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:34:26.571047    9621 out.go:177] 
	W0805 04:34:26.575110    9621 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:34:26.575132    9621 out.go:239] * 
	* 
	W0805 04:34:26.577475    9621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:34:26.589073    9621 out.go:177] 

** /stderr **
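
Both creation attempts above fail at the same step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket, so the VM never boots. A minimal Go probe along these lines (hypothetical; not part of the test suite) would distinguish "daemon not running" from other launch problems before any VM work is attempted:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from the failing socket_vmnet_client invocation above.
		const sock = "/var/run/socket_vmnet"
		if _, err := os.Stat(sock); err != nil {
			fmt.Printf("socket missing, daemon never started: %v\n", err)
			return
		}
		// "Connection refused" in the log corresponds to this dial failing.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket present but nothing listening: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Since the qemu-img steps succeed every time, the failure is presumably environmental (the socket_vmnet service on this agent), not a regression in the driver or the test.
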
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-390000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-390000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-390000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (75.196167ms)

-- stdout --
	* The control-plane node docker-flags-390000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-390000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-390000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-390000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-390000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-390000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-390000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-390000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-390000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (42.681208ms)

-- stdout --
	* The control-plane node docker-flags-390000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-390000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-390000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-390000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-390000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-390000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-05 04:34:26.725105 -0700 PDT m=+739.300607084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-390000 -n docker-flags-390000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-390000 -n docker-flags-390000: exit status 7 (28.452416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-390000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-390000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-390000
--- FAIL: TestDockerFlags (10.28s)
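
The assertions at docker_test.go:63 and docker_test.go:73 only ever saw the "host is not running" message here, but they amount to substring checks over `systemctl show docker` output. A rough sketch of that style of check, with hypothetical names (the real test collects the output via `minikube ssh`):

	package main

	import (
		"fmt"
		"strings"
	)

	// requireAll fails unless every expected fragment (FOO=BAR, BAZ=BAT,
	// --debug, ...) appears verbatim in the captured systemctl output.
	func requireAll(output string, fragments ...string) error {
		for _, f := range fragments {
			if !strings.Contains(output, f) {
				return fmt.Errorf("expected %q in output, got: %q", f, output)
			}
		}
		return nil
	}

	func main() {
		// Stand-in for `systemctl show docker --property=Environment`.
		out := "Environment=FOO=BAR BAZ=BAT"
		fmt.Println(requireAll(out, "FOO=BAR", "BAZ=BAT")) // <nil>
		fmt.Println(requireAll(out, "QUX=1"))              // error
	}
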

TestForceSystemdFlag (10.21s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag


=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-992000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-992000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.023312292s)

-- stdout --
	* [force-systemd-flag-992000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-992000" primary control-plane node in "force-systemd-flag-992000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-992000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:34:11.522230    9600 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:34:11.522389    9600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:34:11.522393    9600 out.go:304] Setting ErrFile to fd 2...
	I0805 04:34:11.522395    9600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:34:11.522532    9600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:34:11.523611    9600 out.go:298] Setting JSON to false
	I0805 04:34:11.539818    9600 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5621,"bootTime":1722852030,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:34:11.539889    9600 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:34:11.546174    9600 out.go:177] * [force-systemd-flag-992000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:34:11.553181    9600 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:34:11.553226    9600 notify.go:220] Checking for updates...
	I0805 04:34:11.561115    9600 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:34:11.564158    9600 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:34:11.567192    9600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:34:11.570125    9600 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:34:11.573139    9600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:34:11.576449    9600 config.go:182] Loaded profile config "force-systemd-env-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:34:11.576521    9600 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:34:11.576588    9600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:34:11.581099    9600 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:34:11.588184    9600 start.go:297] selected driver: qemu2
	I0805 04:34:11.588188    9600 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:34:11.588194    9600 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:34:11.590573    9600 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:34:11.593168    9600 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:34:11.596174    9600 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 04:34:11.596208    9600 cni.go:84] Creating CNI manager for ""
	I0805 04:34:11.596217    9600 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:34:11.596231    9600 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:34:11.596261    9600 start.go:340] cluster config:
	{Name:force-systemd-flag-992000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:34:11.600033    9600 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:34:11.608147    9600 out.go:177] * Starting "force-systemd-flag-992000" primary control-plane node in "force-systemd-flag-992000" cluster
	I0805 04:34:11.612173    9600 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:34:11.612186    9600 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:34:11.612197    9600 cache.go:56] Caching tarball of preloaded images
	I0805 04:34:11.612250    9600 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:34:11.612256    9600 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:34:11.612316    9600 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/force-systemd-flag-992000/config.json ...
	I0805 04:34:11.612327    9600 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/force-systemd-flag-992000/config.json: {Name:mk195970766085b59dfbaebb735349c33df86dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:34:11.612551    9600 start.go:360] acquireMachinesLock for force-systemd-flag-992000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:34:11.612589    9600 start.go:364] duration metric: took 29.334µs to acquireMachinesLock for "force-systemd-flag-992000"
	I0805 04:34:11.612600    9600 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:34:11.612626    9600 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:34:11.621118    9600 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 04:34:11.639451    9600 start.go:159] libmachine.API.Create for "force-systemd-flag-992000" (driver="qemu2")
	I0805 04:34:11.639479    9600 client.go:168] LocalClient.Create starting
	I0805 04:34:11.639547    9600 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:34:11.639578    9600 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:11.639588    9600 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:11.639624    9600 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:34:11.639647    9600 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:11.639655    9600 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:11.640022    9600 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:34:11.783592    9600 main.go:141] libmachine: Creating SSH key...
	I0805 04:34:11.974171    9600 main.go:141] libmachine: Creating Disk image...
	I0805 04:34:11.974178    9600 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:34:11.974391    9600 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/disk.qcow2
	I0805 04:34:11.984085    9600 main.go:141] libmachine: STDOUT: 
	I0805 04:34:11.984105    9600 main.go:141] libmachine: STDERR: 
	I0805 04:34:11.984154    9600 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/disk.qcow2 +20000M
	I0805 04:34:11.992099    9600 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:34:11.992113    9600 main.go:141] libmachine: STDERR: 
	I0805 04:34:11.992134    9600 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/disk.qcow2
	I0805 04:34:11.992141    9600 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:34:11.992155    9600 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:34:11.992178    9600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:47:0d:da:f3:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/disk.qcow2
	I0805 04:34:11.993853    9600 main.go:141] libmachine: STDOUT: 
	I0805 04:34:11.993865    9600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:34:11.993893    9600 client.go:171] duration metric: took 354.405333ms to LocalClient.Create
	I0805 04:34:13.996080    9600 start.go:128] duration metric: took 2.383411125s to createHost
	I0805 04:34:13.996144    9600 start.go:83] releasing machines lock for "force-systemd-flag-992000", held for 2.383521416s
	W0805 04:34:13.996207    9600 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:34:14.014175    9600 out.go:177] * Deleting "force-systemd-flag-992000" in qemu2 ...
	W0805 04:34:14.034306    9600 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:34:14.034331    9600 start.go:729] Will try again in 5 seconds ...
	I0805 04:34:19.036604    9600 start.go:360] acquireMachinesLock for force-systemd-flag-992000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:34:19.122042    9600 start.go:364] duration metric: took 85.3105ms to acquireMachinesLock for "force-systemd-flag-992000"
	I0805 04:34:19.122176    9600 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:34:19.122487    9600 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:34:19.132028    9600 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 04:34:19.182147    9600 start.go:159] libmachine.API.Create for "force-systemd-flag-992000" (driver="qemu2")
	I0805 04:34:19.182192    9600 client.go:168] LocalClient.Create starting
	I0805 04:34:19.182314    9600 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:34:19.182379    9600 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:19.182395    9600 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:19.182454    9600 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:34:19.182497    9600 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:19.182508    9600 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:19.183223    9600 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:34:19.339980    9600 main.go:141] libmachine: Creating SSH key...
	I0805 04:34:19.450340    9600 main.go:141] libmachine: Creating Disk image...
	I0805 04:34:19.450345    9600 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:34:19.450563    9600 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/disk.qcow2
	I0805 04:34:19.460019    9600 main.go:141] libmachine: STDOUT: 
	I0805 04:34:19.460034    9600 main.go:141] libmachine: STDERR: 
	I0805 04:34:19.460076    9600 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/disk.qcow2 +20000M
	I0805 04:34:19.467865    9600 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:34:19.467881    9600 main.go:141] libmachine: STDERR: 
	I0805 04:34:19.467893    9600 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/disk.qcow2
	I0805 04:34:19.467915    9600 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:34:19.467925    9600 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:34:19.467948    9600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:8a:29:f8:24:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-flag-992000/disk.qcow2
	I0805 04:34:19.469641    9600 main.go:141] libmachine: STDOUT: 
	I0805 04:34:19.469656    9600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:34:19.469669    9600 client.go:171] duration metric: took 287.468292ms to LocalClient.Create
	I0805 04:34:21.471859    9600 start.go:128] duration metric: took 2.349312208s to createHost
	I0805 04:34:21.471982    9600 start.go:83] releasing machines lock for "force-systemd-flag-992000", held for 2.349831208s
	W0805 04:34:21.472286    9600 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-992000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-992000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:34:21.483590    9600 out.go:177] 
	W0805 04:34:21.493202    9600 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:34:21.493237    9600 out.go:239] * 
	* 
	W0805 04:34:21.495877    9600 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:34:21.504728    9600 out.go:177] 

** /stderr **
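
Note that the disk-image steps in the log above succeed on every attempt; only the socket_vmnet handoff fails. The two qemu-img invocations the driver logs (convert, then resize) correspond roughly to this sequence, sketched here with hypothetical names and relative paths:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk mirrors the libmachine steps above: convert the raw
	// scratch image to qcow2, then grow the qcow2 by the requested amount.
	func createDisk(raw, qcow2 string, extraMB int) error {
		convert := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
		if out, err := convert.CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img convert: %v: %s", err, out)
		}
		resize := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB))
		if out, err := resize.CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// 20000 matches "Creating 20000 MB hard disk image..." in the log.
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}
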
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-992000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-992000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-992000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.793542ms)

-- stdout --
	* The control-plane node force-systemd-flag-992000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-992000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-992000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-05 04:34:21.59754 -0700 PDT m=+734.173091834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-992000 -n force-systemd-flag-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-992000 -n force-systemd-flag-992000: exit status 7 (33.969209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-992000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-992000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-992000
--- FAIL: TestForceSystemdFlag (10.21s)
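
Had the VM come up, docker_test.go:110 would have read the guest's cgroup driver with `docker info --format {{.CgroupDriver}}`; with --force-systemd the test presumably accepts only "systemd". A minimal sketch of that verification (function name hypothetical):

	package main

	import (
		"fmt"
		"strings"
	)

	// checkCgroupDriver validates the trimmed output of
	// `docker info --format {{.CgroupDriver}}` run inside the guest.
	func checkCgroupDriver(output string) error {
		if got := strings.TrimSpace(output); got != "systemd" {
			return fmt.Errorf("expected cgroup driver \"systemd\", got %q", got)
		}
		return nil
	}

	func main() {
		fmt.Println(checkCgroupDriver("systemd\n")) // <nil>
		fmt.Println(checkCgroupDriver("cgroupfs"))  // error
	}
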

TestForceSystemdEnv (10.29s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv


=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-058000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-058000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.091347125s)

-- stdout --
	* [force-systemd-env-058000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-058000" primary control-plane node in "force-systemd-env-058000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-058000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:34:06.284453    9568 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:34:06.284586    9568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:34:06.284590    9568 out.go:304] Setting ErrFile to fd 2...
	I0805 04:34:06.284592    9568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:34:06.284738    9568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:34:06.285818    9568 out.go:298] Setting JSON to false
	I0805 04:34:06.302648    9568 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5616,"bootTime":1722852030,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:34:06.302724    9568 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:34:06.307334    9568 out.go:177] * [force-systemd-env-058000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:34:06.314365    9568 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:34:06.314380    9568 notify.go:220] Checking for updates...
	I0805 04:34:06.321313    9568 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:34:06.328292    9568 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:34:06.340323    9568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:34:06.348233    9568 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:34:06.356285    9568 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0805 04:34:06.359595    9568 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:34:06.359645    9568 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:34:06.363275    9568 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:34:06.370306    9568 start.go:297] selected driver: qemu2
	I0805 04:34:06.370314    9568 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:34:06.370322    9568 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:34:06.372279    9568 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:34:06.373551    9568 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:34:06.376344    9568 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 04:34:06.376361    9568 cni.go:84] Creating CNI manager for ""
	I0805 04:34:06.376374    9568 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:34:06.376380    9568 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:34:06.376421    9568 start.go:340] cluster config:
	{Name:force-systemd-env-058000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:34:06.379739    9568 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:34:06.387279    9568 out.go:177] * Starting "force-systemd-env-058000" primary control-plane node in "force-systemd-env-058000" cluster
	I0805 04:34:06.391270    9568 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:34:06.391293    9568 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:34:06.391307    9568 cache.go:56] Caching tarball of preloaded images
	I0805 04:34:06.391376    9568 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:34:06.391381    9568 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:34:06.391449    9568 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/force-systemd-env-058000/config.json ...
	I0805 04:34:06.391459    9568 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/force-systemd-env-058000/config.json: {Name:mkc2148d979e1d77d9b106642fcf5cbfa04a5ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:34:06.391662    9568 start.go:360] acquireMachinesLock for force-systemd-env-058000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:34:06.391694    9568 start.go:364] duration metric: took 26.375µs to acquireMachinesLock for "force-systemd-env-058000"
	I0805 04:34:06.391704    9568 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-058000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:34:06.391731    9568 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:34:06.396177    9568 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 04:34:06.411499    9568 start.go:159] libmachine.API.Create for "force-systemd-env-058000" (driver="qemu2")
	I0805 04:34:06.411620    9568 client.go:168] LocalClient.Create starting
	I0805 04:34:06.411688    9568 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:34:06.411717    9568 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:06.411726    9568 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:06.411765    9568 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:34:06.411787    9568 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:06.411797    9568 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:06.412139    9568 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:34:06.557894    9568 main.go:141] libmachine: Creating SSH key...
	I0805 04:34:06.594299    9568 main.go:141] libmachine: Creating Disk image...
	I0805 04:34:06.594308    9568 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:34:06.594513    9568 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/disk.qcow2
	I0805 04:34:06.603856    9568 main.go:141] libmachine: STDOUT: 
	I0805 04:34:06.603874    9568 main.go:141] libmachine: STDERR: 
	I0805 04:34:06.603917    9568 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/disk.qcow2 +20000M
	I0805 04:34:06.612052    9568 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:34:06.612067    9568 main.go:141] libmachine: STDERR: 
	I0805 04:34:06.612076    9568 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/disk.qcow2
	I0805 04:34:06.612081    9568 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:34:06.612094    9568 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:34:06.612129    9568 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:80:74:08:66:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/disk.qcow2
	I0805 04:34:06.613848    9568 main.go:141] libmachine: STDOUT: 
	I0805 04:34:06.613864    9568 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:34:06.613881    9568 client.go:171] duration metric: took 202.253959ms to LocalClient.Create
	I0805 04:34:08.616089    9568 start.go:128] duration metric: took 2.224310209s to createHost
	I0805 04:34:08.616145    9568 start.go:83] releasing machines lock for "force-systemd-env-058000", held for 2.224421917s
	W0805 04:34:08.616330    9568 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:34:08.627590    9568 out.go:177] * Deleting "force-systemd-env-058000" in qemu2 ...
	W0805 04:34:08.654305    9568 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:34:08.654350    9568 start.go:729] Will try again in 5 seconds ...
	I0805 04:34:13.656613    9568 start.go:360] acquireMachinesLock for force-systemd-env-058000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:34:13.996294    9568 start.go:364] duration metric: took 339.533291ms to acquireMachinesLock for "force-systemd-env-058000"
	I0805 04:34:13.996459    9568 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-058000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:34:13.996667    9568 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:34:14.007205    9568 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 04:34:14.057807    9568 start.go:159] libmachine.API.Create for "force-systemd-env-058000" (driver="qemu2")
	I0805 04:34:14.057862    9568 client.go:168] LocalClient.Create starting
	I0805 04:34:14.058038    9568 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:34:14.058105    9568 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:14.058124    9568 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:14.058183    9568 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:34:14.058226    9568 main.go:141] libmachine: Decoding PEM data...
	I0805 04:34:14.058239    9568 main.go:141] libmachine: Parsing certificate...
	I0805 04:34:14.058873    9568 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:34:14.215877    9568 main.go:141] libmachine: Creating SSH key...
	I0805 04:34:14.277920    9568 main.go:141] libmachine: Creating Disk image...
	I0805 04:34:14.277926    9568 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:34:14.278149    9568 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/disk.qcow2
	I0805 04:34:14.287710    9568 main.go:141] libmachine: STDOUT: 
	I0805 04:34:14.287726    9568 main.go:141] libmachine: STDERR: 
	I0805 04:34:14.287773    9568 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/disk.qcow2 +20000M
	I0805 04:34:14.295650    9568 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:34:14.295668    9568 main.go:141] libmachine: STDERR: 
	I0805 04:34:14.295681    9568 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/disk.qcow2
	I0805 04:34:14.295685    9568 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:34:14.295692    9568 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:34:14.295718    9568 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:ac:87:91:6a:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/force-systemd-env-058000/disk.qcow2
	I0805 04:34:14.297410    9568 main.go:141] libmachine: STDOUT: 
	I0805 04:34:14.297427    9568 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:34:14.297439    9568 client.go:171] duration metric: took 239.569958ms to LocalClient.Create
	I0805 04:34:16.298468    9568 start.go:128] duration metric: took 2.301748416s to createHost
	I0805 04:34:16.298524    9568 start.go:83] releasing machines lock for "force-systemd-env-058000", held for 2.302184833s
	W0805 04:34:16.298865    9568 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-058000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-058000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:34:16.312337    9568 out.go:177] 
	W0805 04:34:16.320272    9568 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:34:16.320290    9568 out.go:239] * 
	* 
	W0805 04:34:16.322609    9568 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:34:16.333156    9568 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-058000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-058000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-058000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.434875ms)

-- stdout --
	* The control-plane node force-systemd-env-058000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-058000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-058000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-05 04:34:16.430373 -0700 PDT m=+729.005974793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-058000 -n force-systemd-env-058000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-058000 -n force-systemd-env-058000: exit status 7 (34.257375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-058000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-058000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-058000
--- FAIL: TestForceSystemdEnv (10.29s)
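
Note: the create sequence above builds the machine disk with two qemu-img steps (a raw-to-qcow2 convert, then a resize to the requested 20000 MB) and only fails afterwards, at the socket_vmnet_client launch. A minimal sketch of the same two disk steps in isolation, using an illustrative /tmp scratch path rather than the profile directory:

	# Sketch only: reproduce libmachine's disk-image steps (paths illustrative).
	RAW=/tmp/disk.qcow2.raw
	IMG=/tmp/disk.qcow2
	qemu-img create -f raw "$RAW" 1G                 # stand-in for the boot2docker-seeded raw image
	qemu-img convert -f raw -O qcow2 "$RAW" "$IMG"   # same convert as in the log
	qemu-img resize "$IMG" +20000M                   # same grow step as in the log
	qemu-img info "$IMG"                             # confirm qcow2 format and virtual size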

TestErrorSpam/setup (9.88s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-993000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-993000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 --driver=qemu2 : exit status 80 (9.881318959s)

-- stdout --
	* [nospam-993000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-993000" primary control-plane node in "nospam-993000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-993000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-993000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-993000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-993000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-993000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19377
- KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-993000" primary control-plane node in "nospam-993000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-993000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-993000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.88s)
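
Note: this failure, like the others in the run, bottoms out in socket_vmnet_client reporting `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning nothing was listening on the host-side unix socket. A quick host check, sketched on the assumption that socket_vmnet was installed via Homebrew as in the minikube qemu driver docs:

	# Sketch only: verify the socket_vmnet daemon the qemu2 driver depends on.
	ls -l /var/run/socket_vmnet            # the unix socket should exist
	pgrep -fl socket_vmnet                 # the daemon should be running (as root)
	sudo brew services start socket_vmnet  # restart the service if it is down
	sudo brew services list | grep socket_vmnet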

TestFunctional/serial/StartWithProxy (10.08s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-814000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-814000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (10.013856166s)

-- stdout --
	* [functional-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-814000" primary control-plane node in "functional-814000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-814000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51045 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51045 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51045 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-814000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-814000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19377
- KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-814000" primary control-plane node in "functional-814000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-814000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51045 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51045 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51045 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-814000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (68.594333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.08s)
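
Note: the two assertions at the end ("Found network options:" and "You appear to be using a proxy") are messages minikube is expected to emit when it detects HTTP_PROXY during a start that gets far enough; here the start dies at socket_vmnet first, so only the "Local proxy ignored" warnings appear. A hand re-run of the proxied start, sketched with the same illustrative proxy address the harness injected:

	# Sketch only: re-run the proxied start by hand (proxy address illustrative).
	HTTP_PROXY=localhost:51045 out/minikube-darwin-arm64 start -p functional-814000 \
	  --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2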

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-814000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-814000 --alsologtostderr -v=8: exit status 80 (5.182629875s)

-- stdout --
	* [functional-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-814000" primary control-plane node in "functional-814000" cluster
	* Restarting existing qemu2 VM for "functional-814000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-814000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:23:19.890614    7889 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:23:19.890750    7889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:23:19.890753    7889 out.go:304] Setting ErrFile to fd 2...
	I0805 04:23:19.890756    7889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:23:19.890904    7889 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:23:19.891913    7889 out.go:298] Setting JSON to false
	I0805 04:23:19.907977    7889 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4969,"bootTime":1722852030,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:23:19.908045    7889 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:23:19.913246    7889 out.go:177] * [functional-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:23:19.920061    7889 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:23:19.920112    7889 notify.go:220] Checking for updates...
	I0805 04:23:19.927152    7889 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:23:19.928679    7889 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:23:19.932149    7889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:23:19.935149    7889 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:23:19.938196    7889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:23:19.941365    7889 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:23:19.941417    7889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:23:19.946180    7889 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:23:19.953125    7889 start.go:297] selected driver: qemu2
	I0805 04:23:19.953131    7889 start.go:901] validating driver "qemu2" against &{Name:functional-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:23:19.953180    7889 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:23:19.955618    7889 cni.go:84] Creating CNI manager for ""
	I0805 04:23:19.955639    7889 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:23:19.955681    7889 start.go:340] cluster config:
	{Name:functional-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:23:19.959408    7889 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:23:19.967144    7889 out.go:177] * Starting "functional-814000" primary control-plane node in "functional-814000" cluster
	I0805 04:23:19.971281    7889 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:23:19.971298    7889 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:23:19.971309    7889 cache.go:56] Caching tarball of preloaded images
	I0805 04:23:19.971369    7889 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:23:19.971376    7889 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:23:19.971436    7889 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/functional-814000/config.json ...
	I0805 04:23:19.971942    7889 start.go:360] acquireMachinesLock for functional-814000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:23:19.971976    7889 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "functional-814000"
	I0805 04:23:19.971985    7889 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:23:19.971989    7889 fix.go:54] fixHost starting: 
	I0805 04:23:19.972112    7889 fix.go:112] recreateIfNeeded on functional-814000: state=Stopped err=<nil>
	W0805 04:23:19.972120    7889 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:23:19.978164    7889 out.go:177] * Restarting existing qemu2 VM for "functional-814000" ...
	I0805 04:23:19.982184    7889 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:23:19.982227    7889 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:9c:c1:45:4c:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/disk.qcow2
	I0805 04:23:19.984267    7889 main.go:141] libmachine: STDOUT: 
	I0805 04:23:19.984285    7889 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:23:19.984317    7889 fix.go:56] duration metric: took 12.326541ms for fixHost
	I0805 04:23:19.984321    7889 start.go:83] releasing machines lock for "functional-814000", held for 12.3405ms
	W0805 04:23:19.984329    7889 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:23:19.984369    7889 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:23:19.984374    7889 start.go:729] Will try again in 5 seconds ...
	I0805 04:23:24.986608    7889 start.go:360] acquireMachinesLock for functional-814000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:23:24.987057    7889 start.go:364] duration metric: took 336.5µs to acquireMachinesLock for "functional-814000"
	I0805 04:23:24.987191    7889 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:23:24.987212    7889 fix.go:54] fixHost starting: 
	I0805 04:23:24.987947    7889 fix.go:112] recreateIfNeeded on functional-814000: state=Stopped err=<nil>
	W0805 04:23:24.987975    7889 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:23:24.995431    7889 out.go:177] * Restarting existing qemu2 VM for "functional-814000" ...
	I0805 04:23:24.999455    7889 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:23:24.999687    7889 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:9c:c1:45:4c:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/disk.qcow2
	I0805 04:23:25.009049    7889 main.go:141] libmachine: STDOUT: 
	I0805 04:23:25.009109    7889 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:23:25.009198    7889 fix.go:56] duration metric: took 21.98975ms for fixHost
	I0805 04:23:25.009216    7889 start.go:83] releasing machines lock for "functional-814000", held for 22.133209ms
	W0805 04:23:25.009398    7889 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-814000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-814000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:23:25.016425    7889 out.go:177] 
	W0805 04:23:25.019419    7889 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:23:25.019474    7889 out.go:239] * 
	* 
	W0805 04:23:25.022082    7889 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:23:25.029369    7889 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-814000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.184489709s for "functional-814000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (67.819958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
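
Note: unlike the first-start path above, SoftStart reuses the stopped machine (fix.go: "Restarting existing qemu2 VM") and hits the same socket_vmnet failure on restart. When the VM image is fine but the network backend was down, the recovery the log itself suggests is a delete-and-recreate; sketched by hand:

	# Sketch only: the recovery path minikube's own error message proposes.
	out/minikube-darwin-arm64 delete -p functional-814000
	out/minikube-darwin-arm64 start -p functional-814000 --memory=4000 --apiserver-port=8441 --driver=qemu2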

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.019916ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-814000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (29.410291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
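
Note: with every start attempt failing before provisioning, minikube never wrote a context into the kubeconfig, which is why current-context is unset instead of "functional-814000". The state can be confirmed directly against the kubeconfig path shown in the logs above:

	# Sketch only: inspect the kubeconfig the test run was pointed at.
	export KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	kubectl config get-contexts      # empty: no cluster was ever registered
	kubectl config current-context   # fails with "current-context is not set"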

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-814000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-814000 get po -A: exit status 1 (26.647292ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-814000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-814000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-814000\n"*: args "kubectl --context functional-814000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-814000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (30.18ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh sudo crictl images: exit status 83 (44.830667ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-814000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (38.384292ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-814000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (38.427875ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.652542ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-814000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
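
Note: the cache_reload flow is a round-trip that needs a running node: remove the image inside the VM, confirm it is gone, re-push it from the host-side cache, confirm it is back. The same sequence by hand against a healthy profile, sketched with the test's own image and command forms:

	# Sketch only: the cache reload round-trip the test automates.
	out/minikube-darwin-arm64 -p functional-814000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-814000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now absent
	out/minikube-darwin-arm64 -p functional-814000 cache reload                                            # re-push cached images into the node
	out/minikube-darwin-arm64 -p functional-814000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # present again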

TestFunctional/serial/MinikubeKubectlCmd (0.73s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 kubectl -- --context functional-814000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 kubectl -- --context functional-814000 get pods: exit status 1 (701.299917ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-814000
	* no server found for cluster "functional-814000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-814000 kubectl -- --context functional-814000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (31.805708ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.73s)
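The error here is a missing kubeconfig context, not an unreachable API server: because no "minikube start" for this profile ever completed, the functional-814000 context and cluster entries were never written to the kubeconfig. The same applies to the MinikubeKubectlCmdDirectly failure below. A sketch of how one might confirm, assuming kubectl is on PATH and KUBECONFIG points at the file listed in the start output:

	# shell: the functional-814000 entries should be absent from both lists
	kubectl config get-contexts
	kubectl config view -o jsonpath='{.clusters[*].name}'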

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.86s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-814000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-814000 get pods: exit status 1 (944.675459ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-814000
	* no server found for cluster "functional-814000"

                                                
                                                
** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-814000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (910.037583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.86s)

                                                
                                    
TestFunctional/serial/ExtraConfig (5.26s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-814000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-814000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.191935583s)

                                                
                                                
-- stdout --
	* [functional-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-814000" primary control-plane node in "functional-814000" cluster
	* Restarting existing qemu2 VM for "functional-814000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-814000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-814000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-814000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.192795167s for "functional-814000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (67.55175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
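Every start attempt in this run fails the same way: the qemu2 driver launches QEMU through socket_vmnet_client (the full command line is reproduced in the TestFunctional/serial/LogsCmd trace below), and the client gets "Connection refused" on /var/run/socket_vmnet, meaning the socket_vmnet daemon is not listening on the build agent. A sketch of how one might verify and recover, assuming socket_vmnet was installed via Homebrew as minikube's qemu2 driver docs describe:

	# shell: the socket should exist and the daemon should be running
	ls -l /var/run/socket_vmnet
	pgrep -l socket_vmnet
	# if missing, restarting the Homebrew-managed service (runs as root) may recover it
	sudo brew services restart socket_vmnet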

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-814000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-814000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.658083ms)

                                                
                                                
** stderr ** 
	error: context "functional-814000" does not exist

                                                
                                                
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-814000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (29.474792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 logs: exit status 83 (75.631166ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | -p download-only-095000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| delete  | -p download-only-095000                                                  | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| start   | -o=json --download-only                                                  | download-only-741000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | -p download-only-741000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| delete  | -p download-only-741000                                                  | download-only-741000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| start   | -o=json --download-only                                                  | download-only-638000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | -p download-only-638000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                                        |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| delete  | -p download-only-638000                                                  | download-only-638000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| delete  | -p download-only-095000                                                  | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| delete  | -p download-only-741000                                                  | download-only-741000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| delete  | -p download-only-638000                                                  | download-only-638000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| start   | --download-only -p                                                       | binary-mirror-737000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | binary-mirror-737000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51013                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-737000                                                  | binary-mirror-737000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| addons  | enable dashboard -p                                                      | addons-939000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | addons-939000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-939000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | addons-939000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-939000 --wait=true                                             | addons-939000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-939000                                                         | addons-939000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| start   | -p nospam-993000 -n=1 --memory=2250 --wait=false                         | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-993000                                                         | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	| start   | -p functional-814000                                                     | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-814000                                                     | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-814000 cache add                                              | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-814000 cache add                                              | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-814000 cache add                                              | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-814000 cache add                                              | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	|         | minikube-local-cache-test:functional-814000                              |                      |         |         |                     |                     |
	| cache   | functional-814000 cache delete                                           | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	|         | minikube-local-cache-test:functional-814000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	| ssh     | functional-814000 ssh sudo                                               | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-814000                                                        | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-814000 ssh                                                    | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-814000 cache reload                                           | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	| ssh     | functional-814000 ssh                                                    | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-814000 kubectl --                                             | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | --context functional-814000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-814000                                                     | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 04:23:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 04:23:30.953562    7967 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:23:30.953684    7967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:23:30.953685    7967 out.go:304] Setting ErrFile to fd 2...
	I0805 04:23:30.953687    7967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:23:30.953819    7967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:23:30.954876    7967 out.go:298] Setting JSON to false
	I0805 04:23:30.970849    7967 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4980,"bootTime":1722852030,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:23:30.970915    7967 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:23:30.977269    7967 out.go:177] * [functional-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:23:30.986222    7967 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:23:30.986271    7967 notify.go:220] Checking for updates...
	I0805 04:23:30.995110    7967 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:23:30.998192    7967 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:23:31.001161    7967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:23:31.004148    7967 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:23:31.007189    7967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:23:31.010424    7967 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:23:31.010473    7967 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:23:31.015172    7967 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:23:31.022218    7967 start.go:297] selected driver: qemu2
	I0805 04:23:31.022223    7967 start.go:901] validating driver "qemu2" against &{Name:functional-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:23:31.022285    7967 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:23:31.024541    7967 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:23:31.024561    7967 cni.go:84] Creating CNI manager for ""
	I0805 04:23:31.024567    7967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:23:31.024613    7967 start.go:340] cluster config:
	{Name:functional-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:23:31.028055    7967 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:23:31.036163    7967 out.go:177] * Starting "functional-814000" primary control-plane node in "functional-814000" cluster
	I0805 04:23:31.039142    7967 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:23:31.039155    7967 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:23:31.039167    7967 cache.go:56] Caching tarball of preloaded images
	I0805 04:23:31.039223    7967 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:23:31.039228    7967 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:23:31.039287    7967 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/functional-814000/config.json ...
	I0805 04:23:31.039774    7967 start.go:360] acquireMachinesLock for functional-814000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:23:31.039808    7967 start.go:364] duration metric: took 30.417µs to acquireMachinesLock for "functional-814000"
	I0805 04:23:31.039815    7967 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:23:31.039819    7967 fix.go:54] fixHost starting: 
	I0805 04:23:31.039938    7967 fix.go:112] recreateIfNeeded on functional-814000: state=Stopped err=<nil>
	W0805 04:23:31.039944    7967 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:23:31.048057    7967 out.go:177] * Restarting existing qemu2 VM for "functional-814000" ...
	I0805 04:23:31.052142    7967 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:23:31.052178    7967 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:9c:c1:45:4c:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/disk.qcow2
	I0805 04:23:31.054229    7967 main.go:141] libmachine: STDOUT: 
	I0805 04:23:31.054247    7967 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:23:31.054275    7967 fix.go:56] duration metric: took 14.456416ms for fixHost
	I0805 04:23:31.054278    7967 start.go:83] releasing machines lock for "functional-814000", held for 14.466625ms
	W0805 04:23:31.054284    7967 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:23:31.054319    7967 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:23:31.054324    7967 start.go:729] Will try again in 5 seconds ...
	I0805 04:23:36.056483    7967 start.go:360] acquireMachinesLock for functional-814000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:23:36.056819    7967 start.go:364] duration metric: took 300.709µs to acquireMachinesLock for "functional-814000"
	I0805 04:23:36.056948    7967 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:23:36.056960    7967 fix.go:54] fixHost starting: 
	I0805 04:23:36.057644    7967 fix.go:112] recreateIfNeeded on functional-814000: state=Stopped err=<nil>
	W0805 04:23:36.057664    7967 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:23:36.065985    7967 out.go:177] * Restarting existing qemu2 VM for "functional-814000" ...
	I0805 04:23:36.069021    7967 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:23:36.069161    7967 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:9c:c1:45:4c:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/disk.qcow2
	I0805 04:23:36.078211    7967 main.go:141] libmachine: STDOUT: 
	I0805 04:23:36.078274    7967 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:23:36.078369    7967 fix.go:56] duration metric: took 21.40775ms for fixHost
	I0805 04:23:36.078382    7967 start.go:83] releasing machines lock for "functional-814000", held for 21.547125ms
	W0805 04:23:36.078579    7967 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-814000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:23:36.086042    7967 out.go:177] 
	W0805 04:23:36.090089    7967 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:23:36.090120    7967 out.go:239] * 
	W0805 04:23:36.092675    7967 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:23:36.099478    7967 out.go:177] 
	
	
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

                                                
                                                
-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-814000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | -p download-only-095000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| delete  | -p download-only-095000                                                  | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| start   | -o=json --download-only                                                  | download-only-741000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | -p download-only-741000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| delete  | -p download-only-741000                                                  | download-only-741000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| start   | -o=json --download-only                                                  | download-only-638000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | -p download-only-638000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-rc.0                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| delete  | -p download-only-638000                                                  | download-only-638000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| delete  | -p download-only-095000                                                  | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| delete  | -p download-only-741000                                                  | download-only-741000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| delete  | -p download-only-638000                                                  | download-only-638000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| start   | --download-only -p                                                       | binary-mirror-737000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | binary-mirror-737000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51013                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-737000                                                  | binary-mirror-737000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| addons  | enable dashboard -p                                                      | addons-939000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | addons-939000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-939000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | addons-939000                                                            |                      |         |         |                     |                     |
| start   | -p addons-939000 --wait=true                                             | addons-939000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-939000                                                         | addons-939000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| start   | -p nospam-993000 -n=1 --memory=2250 --wait=false                         | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-993000                                                         | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
| start   | -p functional-814000                                                     | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-814000                                                     | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-814000 cache add                                              | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-814000 cache add                                              | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-814000 cache add                                              | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-814000 cache add                                              | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | minikube-local-cache-test:functional-814000                              |                      |         |         |                     |                     |
| cache   | functional-814000 cache delete                                           | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | minikube-local-cache-test:functional-814000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
| ssh     | functional-814000 ssh sudo                                               | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-814000                                                        | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-814000 ssh                                                    | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-814000 cache reload                                           | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
| ssh     | functional-814000 ssh                                                    | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-814000 kubectl --                                             | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | --context functional-814000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-814000                                                     | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/08/05 04:23:30
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0805 04:23:30.953562    7967 out.go:291] Setting OutFile to fd 1 ...
I0805 04:23:30.953684    7967 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:23:30.953685    7967 out.go:304] Setting ErrFile to fd 2...
I0805 04:23:30.953687    7967 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:23:30.953819    7967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
I0805 04:23:30.954876    7967 out.go:298] Setting JSON to false
I0805 04:23:30.970849    7967 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4980,"bootTime":1722852030,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0805 04:23:30.970915    7967 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0805 04:23:30.977269    7967 out.go:177] * [functional-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0805 04:23:30.986222    7967 out.go:177]   - MINIKUBE_LOCATION=19377
I0805 04:23:30.986271    7967 notify.go:220] Checking for updates...
I0805 04:23:30.995110    7967 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
I0805 04:23:30.998192    7967 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0805 04:23:31.001161    7967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0805 04:23:31.004148    7967 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
I0805 04:23:31.007189    7967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0805 04:23:31.010424    7967 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 04:23:31.010473    7967 driver.go:392] Setting default libvirt URI to qemu:///system
I0805 04:23:31.015172    7967 out.go:177] * Using the qemu2 driver based on existing profile
I0805 04:23:31.022218    7967 start.go:297] selected driver: qemu2
I0805 04:23:31.022223    7967 start.go:901] validating driver "qemu2" against &{Name:functional-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 04:23:31.022285    7967 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0805 04:23:31.024541    7967 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0805 04:23:31.024561    7967 cni.go:84] Creating CNI manager for ""
I0805 04:23:31.024567    7967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 04:23:31.024613    7967 start.go:340] cluster config:
{Name:functional-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 04:23:31.028055    7967 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0805 04:23:31.036163    7967 out.go:177] * Starting "functional-814000" primary control-plane node in "functional-814000" cluster
I0805 04:23:31.039142    7967 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0805 04:23:31.039155    7967 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0805 04:23:31.039167    7967 cache.go:56] Caching tarball of preloaded images
I0805 04:23:31.039223    7967 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0805 04:23:31.039228    7967 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0805 04:23:31.039287    7967 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/functional-814000/config.json ...
I0805 04:23:31.039774    7967 start.go:360] acquireMachinesLock for functional-814000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0805 04:23:31.039808    7967 start.go:364] duration metric: took 30.417µs to acquireMachinesLock for "functional-814000"
I0805 04:23:31.039815    7967 start.go:96] Skipping create...Using existing machine configuration
I0805 04:23:31.039819    7967 fix.go:54] fixHost starting: 
I0805 04:23:31.039938    7967 fix.go:112] recreateIfNeeded on functional-814000: state=Stopped err=<nil>
W0805 04:23:31.039944    7967 fix.go:138] unexpected machine state, will restart: <nil>
I0805 04:23:31.048057    7967 out.go:177] * Restarting existing qemu2 VM for "functional-814000" ...
I0805 04:23:31.052142    7967 qemu.go:418] Using hvf for hardware acceleration
I0805 04:23:31.052178    7967 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:9c:c1:45:4c:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/disk.qcow2
I0805 04:23:31.054229    7967 main.go:141] libmachine: STDOUT: 
I0805 04:23:31.054247    7967 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0805 04:23:31.054275    7967 fix.go:56] duration metric: took 14.456416ms for fixHost
I0805 04:23:31.054278    7967 start.go:83] releasing machines lock for "functional-814000", held for 14.466625ms
W0805 04:23:31.054284    7967 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0805 04:23:31.054319    7967 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0805 04:23:31.054324    7967 start.go:729] Will try again in 5 seconds ...
I0805 04:23:36.056483    7967 start.go:360] acquireMachinesLock for functional-814000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0805 04:23:36.056819    7967 start.go:364] duration metric: took 300.709µs to acquireMachinesLock for "functional-814000"
I0805 04:23:36.056948    7967 start.go:96] Skipping create...Using existing machine configuration
I0805 04:23:36.056960    7967 fix.go:54] fixHost starting: 
I0805 04:23:36.057644    7967 fix.go:112] recreateIfNeeded on functional-814000: state=Stopped err=<nil>
W0805 04:23:36.057664    7967 fix.go:138] unexpected machine state, will restart: <nil>
I0805 04:23:36.065985    7967 out.go:177] * Restarting existing qemu2 VM for "functional-814000" ...
I0805 04:23:36.069021    7967 qemu.go:418] Using hvf for hardware acceleration
I0805 04:23:36.069161    7967 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:9c:c1:45:4c:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/disk.qcow2
I0805 04:23:36.078211    7967 main.go:141] libmachine: STDOUT: 
I0805 04:23:36.078274    7967 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0805 04:23:36.078369    7967 fix.go:56] duration metric: took 21.40775ms for fixHost
I0805 04:23:36.078382    7967 start.go:83] releasing machines lock for "functional-814000", held for 21.547125ms
W0805 04:23:36.078579    7967 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-814000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0805 04:23:36.086042    7967 out.go:177] 
W0805 04:23:36.090089    7967 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0805 04:23:36.090120    7967 out.go:239] * 
W0805 04:23:36.092675    7967 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0805 04:23:36.099478    7967 out.go:177] 

* The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2762194525/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | -p download-only-095000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| delete  | -p download-only-095000                                                  | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| start   | -o=json --download-only                                                  | download-only-741000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | -p download-only-741000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| delete  | -p download-only-741000                                                  | download-only-741000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| start   | -o=json --download-only                                                  | download-only-638000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | -p download-only-638000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-rc.0                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| delete  | -p download-only-638000                                                  | download-only-638000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| delete  | -p download-only-095000                                                  | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| delete  | -p download-only-741000                                                  | download-only-741000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| delete  | -p download-only-638000                                                  | download-only-638000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| start   | --download-only -p                                                       | binary-mirror-737000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | binary-mirror-737000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51013                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-737000                                                  | binary-mirror-737000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| addons  | enable dashboard -p                                                      | addons-939000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | addons-939000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-939000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | addons-939000                                                            |                      |         |         |                     |                     |
| start   | -p addons-939000 --wait=true                                             | addons-939000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-939000                                                         | addons-939000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
| start   | -p nospam-993000 -n=1 --memory=2250 --wait=false                         | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-993000 --log_dir                                                  | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-993000                                                         | nospam-993000        | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
| start   | -p functional-814000                                                     | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-814000                                                     | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-814000 cache add                                              | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-814000 cache add                                              | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-814000 cache add                                              | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-814000 cache add                                              | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | minikube-local-cache-test:functional-814000                              |                      |         |         |                     |                     |
| cache   | functional-814000 cache delete                                           | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | minikube-local-cache-test:functional-814000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
| ssh     | functional-814000 ssh sudo                                               | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-814000                                                        | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-814000 ssh                                                    | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-814000 cache reload                                           | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
| ssh     | functional-814000 ssh                                                    | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT | 05 Aug 24 04:23 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-814000 kubectl --                                             | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | --context functional-814000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-814000                                                     | functional-814000    | jenkins | v1.33.1 | 05 Aug 24 04:23 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/08/05 04:23:30
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0805 04:23:30.953562    7967 out.go:291] Setting OutFile to fd 1 ...
I0805 04:23:30.953684    7967 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:23:30.953685    7967 out.go:304] Setting ErrFile to fd 2...
I0805 04:23:30.953687    7967 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:23:30.953819    7967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
I0805 04:23:30.954876    7967 out.go:298] Setting JSON to false
I0805 04:23:30.970849    7967 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4980,"bootTime":1722852030,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0805 04:23:30.970915    7967 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0805 04:23:30.977269    7967 out.go:177] * [functional-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0805 04:23:30.986222    7967 out.go:177]   - MINIKUBE_LOCATION=19377
I0805 04:23:30.986271    7967 notify.go:220] Checking for updates...
I0805 04:23:30.995110    7967 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
I0805 04:23:30.998192    7967 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0805 04:23:31.001161    7967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0805 04:23:31.004148    7967 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
I0805 04:23:31.007189    7967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0805 04:23:31.010424    7967 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 04:23:31.010473    7967 driver.go:392] Setting default libvirt URI to qemu:///system
I0805 04:23:31.015172    7967 out.go:177] * Using the qemu2 driver based on existing profile
I0805 04:23:31.022218    7967 start.go:297] selected driver: qemu2
I0805 04:23:31.022223    7967 start.go:901] validating driver "qemu2" against &{Name:functional-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 04:23:31.022285    7967 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0805 04:23:31.024541    7967 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0805 04:23:31.024561    7967 cni.go:84] Creating CNI manager for ""
I0805 04:23:31.024567    7967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 04:23:31.024613    7967 start.go:340] cluster config:
{Name:functional-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 04:23:31.028055    7967 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0805 04:23:31.036163    7967 out.go:177] * Starting "functional-814000" primary control-plane node in "functional-814000" cluster
I0805 04:23:31.039142    7967 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0805 04:23:31.039155    7967 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0805 04:23:31.039167    7967 cache.go:56] Caching tarball of preloaded images
I0805 04:23:31.039223    7967 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0805 04:23:31.039228    7967 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0805 04:23:31.039287    7967 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/functional-814000/config.json ...
I0805 04:23:31.039774    7967 start.go:360] acquireMachinesLock for functional-814000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0805 04:23:31.039808    7967 start.go:364] duration metric: took 30.417µs to acquireMachinesLock for "functional-814000"
I0805 04:23:31.039815    7967 start.go:96] Skipping create...Using existing machine configuration
I0805 04:23:31.039819    7967 fix.go:54] fixHost starting: 
I0805 04:23:31.039938    7967 fix.go:112] recreateIfNeeded on functional-814000: state=Stopped err=<nil>
W0805 04:23:31.039944    7967 fix.go:138] unexpected machine state, will restart: <nil>
I0805 04:23:31.048057    7967 out.go:177] * Restarting existing qemu2 VM for "functional-814000" ...
I0805 04:23:31.052142    7967 qemu.go:418] Using hvf for hardware acceleration
I0805 04:23:31.052178    7967 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:9c:c1:45:4c:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/disk.qcow2
I0805 04:23:31.054229    7967 main.go:141] libmachine: STDOUT: 
I0805 04:23:31.054247    7967 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0805 04:23:31.054275    7967 fix.go:56] duration metric: took 14.456416ms for fixHost
I0805 04:23:31.054278    7967 start.go:83] releasing machines lock for "functional-814000", held for 14.466625ms
W0805 04:23:31.054284    7967 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0805 04:23:31.054319    7967 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0805 04:23:31.054324    7967 start.go:729] Will try again in 5 seconds ...
I0805 04:23:36.056483    7967 start.go:360] acquireMachinesLock for functional-814000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0805 04:23:36.056819    7967 start.go:364] duration metric: took 300.709µs to acquireMachinesLock for "functional-814000"
I0805 04:23:36.056948    7967 start.go:96] Skipping create...Using existing machine configuration
I0805 04:23:36.056960    7967 fix.go:54] fixHost starting: 
I0805 04:23:36.057644    7967 fix.go:112] recreateIfNeeded on functional-814000: state=Stopped err=<nil>
W0805 04:23:36.057664    7967 fix.go:138] unexpected machine state, will restart: <nil>
I0805 04:23:36.065985    7967 out.go:177] * Restarting existing qemu2 VM for "functional-814000" ...
I0805 04:23:36.069021    7967 qemu.go:418] Using hvf for hardware acceleration
I0805 04:23:36.069161    7967 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:9c:c1:45:4c:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/functional-814000/disk.qcow2
I0805 04:23:36.078211    7967 main.go:141] libmachine: STDOUT: 
I0805 04:23:36.078274    7967 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0805 04:23:36.078369    7967 fix.go:56] duration metric: took 21.40775ms for fixHost
I0805 04:23:36.078382    7967 start.go:83] releasing machines lock for "functional-814000", held for 21.547125ms
W0805 04:23:36.078579    7967 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-814000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0805 04:23:36.086042    7967 out.go:177] 
W0805 04:23:36.090089    7967 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0805 04:23:36.090120    7967 out.go:239] * 
W0805 04:23:36.092675    7967 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0805 04:23:36.099478    7967 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
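
Every failure in this run traces back to the same line above: the qemu2 driver could not reach /var/run/socket_vmnet. A minimal Go sketch of that probe, written for this report (the socket path is taken from the log; this is our illustration, not minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client hands to qemu-system-aarch64.
	// When the socket_vmnet daemon is not running, this fails with
	// "connection refused" -- the same error the restart loop above hit twice.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}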

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-814000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-814000 apply -f testdata/invalidsvc.yaml: exit status 1 (29.286209ms)

** stderr ** 
	error: context "functional-814000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-814000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-814000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-814000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-814000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-814000 --alsologtostderr -v=1] stderr:
I0805 04:24:16.458994    8288 out.go:291] Setting OutFile to fd 1 ...
I0805 04:24:16.459396    8288 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:24:16.459404    8288 out.go:304] Setting ErrFile to fd 2...
I0805 04:24:16.459407    8288 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:24:16.459569    8288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
I0805 04:24:16.459840    8288 mustload.go:65] Loading cluster: functional-814000
I0805 04:24:16.460012    8288 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 04:24:16.464333    8288 out.go:177] * The control-plane node functional-814000 host is not running: state=Stopped
I0805 04:24:16.468331    8288 out.go:177]   To start a cluster, run: "minikube start -p functional-814000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (42.347875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 status: exit status 7 (29.785084ms)

-- stdout --
	functional-814000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-814000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.250584ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-814000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
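
For reference, the custom format string is an ordinary Go text/template, and "kublet" in the output is literal text from the format string itself (only {{.Kubelet}} is a field lookup), so that spelling is expected. A standalone sketch of the rendering, with a stand-in struct whose fields mirror the placeholders:

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for minikube's internal status type;
// the field names match the {{.Field}} placeholders used by the test above.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	t := template.Must(template.New("status").Parse(format))
	// Prints: host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
	_ = t.Execute(os.Stdout, Status{"Stopped", "Stopped", "Stopped", "Stopped"})
}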
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 status -o json: exit status 7 (29.078166ms)

-- stdout --
	{"Name":"functional-814000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-814000 status -o json" : exit status 7
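
The -o json form emits one JSON object per node, which makes it the easiest output to consume programmatically. A minimal decoding sketch; the struct name is ours and the fields are copied from the stdout above:

package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus mirrors the object printed above; the type itself is hypothetical.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := []byte(`{"Name":"functional-814000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	var st nodeStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s apiserver=%s\n", st.Name, st.Host, st.APIServer)
}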
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (28.964708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)

TestFunctional/parallel/ServiceCmdConnect (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-814000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-814000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.209333ms)

** stderr ** 
	error: context "functional-814000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-814000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-814000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-814000 describe po hello-node-connect: exit status 1 (26.329125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-814000

** /stderr **
functional_test.go:1600: "kubectl --context functional-814000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-814000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-814000 logs -l app=hello-node-connect: exit status 1 (25.714333ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-814000

** /stderr **
functional_test.go:1606: "kubectl --context functional-814000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-814000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-814000 describe svc hello-node-connect: exit status 1 (25.206208ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-814000

** /stderr **
functional_test.go:1612: "kubectl --context functional-814000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (28.179ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.13s)
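
The recurring "context was not found for specified context" stderr is kubectl's client-go configuration error: the functional-814000 profile was never written to the kubeconfig because the VM never started. A sketch that appears to reproduce the same lookup with client-go (assuming k8s.io/client-go is available; this is our illustration, not the test's code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "functional-814000"}
	// With no such context in the kubeconfig, ClientConfig() returns a
	// configuration error like the one kubectl printed above.
	_, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	fmt.Println(err)
}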

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-814000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (29.803375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.11s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "echo hello": exit status 83 (40.611792ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-814000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-814000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-814000\"\n"*. args "out/minikube-darwin-arm64 -p functional-814000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "cat /etc/hostname": exit status 83 (37.237292ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-814000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-814000"- but got *"* The control-plane node functional-814000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-814000\"\n"*. args "out/minikube-darwin-arm64 -p functional-814000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (30.102209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.11s)

TestFunctional/parallel/CpCmd (0.26s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (52.646625ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-814000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh -n functional-814000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh -n functional-814000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.763708ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-814000 ssh -n functional-814000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-814000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-814000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 cp functional-814000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2321701454/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 cp functional-814000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2321701454/001/cp-test.txt: exit status 83 (38.634875ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-814000 cp functional-814000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2321701454/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh -n functional-814000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh -n functional-814000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.884833ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-814000 ssh -n functional-814000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2321701454/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-814000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-814000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (44.669541ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-814000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh -n functional-814000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh -n functional-814000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (38.939125ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-814000 ssh -n functional-814000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-814000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-814000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.26s)
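
The "(-want +got)" blocks above are go-cmp-style diffs: "-" lines are the expected file content, "+" lines are what the command actually printed. A minimal reproduction, assuming github.com/google/go-cmp is available; the strings are taken from the diff above:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-814000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-814000\"\n"
	// cmp.Diff prints want-only lines with "-" and got-only lines with "+",
	// the same convention as the mismatch blocks in this report.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("content mismatch (-want +got):\n%s", diff)
	}
}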

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7624/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /etc/test/nested/copy/7624/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /etc/test/nested/copy/7624/hosts": exit status 83 (40.314375ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /etc/test/nested/copy/7624/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-814000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-814000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (30.021ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7624.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /etc/ssl/certs/7624.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /etc/ssl/certs/7624.pem": exit status 83 (41.092583ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/7624.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-814000 ssh \"sudo cat /etc/ssl/certs/7624.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7624.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-814000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-814000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7624.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /usr/share/ca-certificates/7624.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /usr/share/ca-certificates/7624.pem": exit status 83 (57.673875ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/7624.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-814000 ssh \"sudo cat /usr/share/ca-certificates/7624.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7624.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-814000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-814000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (40.493875ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-814000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-814000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-814000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/76242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /etc/ssl/certs/76242.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /etc/ssl/certs/76242.pem": exit status 83 (36.731459ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/76242.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-814000 ssh \"sudo cat /etc/ssl/certs/76242.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/76242.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-814000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-814000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/76242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /usr/share/ca-certificates/76242.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /usr/share/ca-certificates/76242.pem": exit status 83 (38.552291ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/76242.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-814000 ssh \"sudo cat /usr/share/ca-certificates/76242.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/76242.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-814000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-814000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (44.523667ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-814000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-814000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-814000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (29.036ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)

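Note: the CertSync assertions compare minikube_test2.pem on the host against the copies that a successful "minikube start" installs into the guest under /usr/share/ca-certificates/ and as an OpenSSL hash link under /etc/ssl/certs/. Because the guest never booted, ssh exits 83 and its advisory text is what gets diffed against the certificate. A minimal manual re-check, assuming the same profile name and a running guest:

    out/minikube-darwin-arm64 -p functional-814000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0" | diff minikube_test2.pem -

An empty diff confirms the sync.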
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-814000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-814000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.123542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-814000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-814000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-814000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-814000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-814000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-814000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-814000

                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-814000 -n functional-814000: exit status 7 (31.571584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

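Note: all five label assertions fail for the same reason: start never completed, so no functional-814000 entry was ever written to the kubeconfig. minikube stamps the minikube.k8s.io/* labels (commit, version, name, primary, updated_at) on each node at start time; with a started cluster the same check can be made by hand:

    kubectl config get-contexts
    kubectl --context functional-814000 get nodes --show-labels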
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "sudo systemctl is-active crio": exit status 83 (44.418209ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-814000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-814000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

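Note: systemctl is-active prints the unit state and exits 0 only when the unit is active, so on a healthy docker-runtime node the expected answer is the literal string "inactive" with a non-zero exit, which the test accepts. Here the ssh wrapper refused with exit 83 before systemctl could run at all. Manual form, assuming a running guest:

    out/minikube-darwin-arm64 -p functional-814000 ssh "sudo systemctl is-active crio"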
TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 version -o=json --components: exit status 83 (40.7475ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

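Note: as the exit-83 advisory shows, "version --components" needs a running guest; besides minikube's own version it reports the component binaries inside the VM (buildctl, containerd, crictl, docker, podman, and so on). A sketch of the manual run, pretty-printed on the host:

    out/minikube-darwin-arm64 -p functional-814000 version -o=json --components | python3 -m json.tool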
TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-814000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-814000 image ls --format short --alsologtostderr:
I0805 04:24:16.895866    8305 out.go:291] Setting OutFile to fd 1 ...
I0805 04:24:16.896020    8305 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:24:16.896023    8305 out.go:304] Setting ErrFile to fd 2...
I0805 04:24:16.896025    8305 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:24:16.896156    8305 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
I0805 04:24:16.896547    8305 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 04:24:16.896615    8305 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

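Note: this subtest and the three format variants that follow (table, json, yaml) run the same "image ls" against a stopped guest. That returns an empty list rather than an error, so each one fails on the missing registry.k8s.io/pause entry instead of on the command itself. Equivalent manual checks against a running profile:

    out/minikube-darwin-arm64 -p functional-814000 image ls --format table
    out/minikube-darwin-arm64 -p functional-814000 image ls --format json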
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-814000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-814000 image ls --format table --alsologtostderr:
I0805 04:24:16.968207    8309 out.go:291] Setting OutFile to fd 1 ...
I0805 04:24:16.968358    8309 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:24:16.968361    8309 out.go:304] Setting ErrFile to fd 2...
I0805 04:24:16.968363    8309 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:24:16.968506    8309 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
I0805 04:24:16.968905    8309 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 04:24:16.968964    8309 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-814000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-814000 image ls --format json --alsologtostderr:
I0805 04:24:16.931780    8307 out.go:291] Setting OutFile to fd 1 ...
I0805 04:24:16.931940    8307 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:24:16.931943    8307 out.go:304] Setting ErrFile to fd 2...
I0805 04:24:16.931945    8307 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:24:16.932092    8307 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
I0805 04:24:16.932504    8307 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 04:24:16.932564    8307 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-814000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-814000 image ls --format yaml --alsologtostderr:
I0805 04:24:16.861545    8303 out.go:291] Setting OutFile to fd 1 ...
I0805 04:24:16.861695    8303 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:24:16.861698    8303 out.go:304] Setting ErrFile to fd 2...
I0805 04:24:16.861700    8303 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:24:16.861831    8303 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
I0805 04:24:16.862266    8303 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 04:24:16.862324    8303 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh pgrep buildkitd: exit status 83 (40.255375ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image build -t localhost/my-image:functional-814000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-814000 image build -t localhost/my-image:functional-814000 testdata/build --alsologtostderr:
I0805 04:24:17.041756    8313 out.go:291] Setting OutFile to fd 1 ...
I0805 04:24:17.042129    8313 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:24:17.042132    8313 out.go:304] Setting ErrFile to fd 2...
I0805 04:24:17.042134    8313 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:24:17.042254    8313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
I0805 04:24:17.042624    8313 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 04:24:17.043038    8313 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 04:24:17.043273    8313 build_images.go:133] succeeded building to: 
I0805 04:24:17.043277    8313 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image ls
functional_test.go:442: expected "localhost/my-image:functional-814000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

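Note: "image build" first probes for buildkitd in the guest with "ssh pgrep buildkitd"; with the host stopped, both the probe and the build no-op (note the empty "succeeded building to:" log line), so the tag never appears in "image ls". A manual reproduction, assuming testdata/build holds the Dockerfile from the test tree:

    out/minikube-darwin-arm64 -p functional-814000 image build -t localhost/my-image:functional-814000 testdata/build
    out/minikube-darwin-arm64 -p functional-814000 image ls | grep my-image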
TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-814000 docker-env) && out/minikube-darwin-arm64 status -p functional-814000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-814000 docker-env) && out/minikube-darwin-arm64 status -p functional-814000": exit status 1 (44.691708ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

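Note: the docker-env test exports variables that point the host docker client at the daemon inside the VM, then re-runs "minikube status" in that environment. The usual interactive form, assuming a running profile, is:

    eval $(out/minikube-darwin-arm64 -p functional-814000 docker-env)
    docker info --format '{{.Name}}'

where the reported engine name should be the minikube VM, not the local machine.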
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 update-context --alsologtostderr -v=2: exit status 83 (42.9095ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
** stderr ** 
	I0805 04:24:16.735243    8297 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:24:16.736143    8297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:24:16.736147    8297 out.go:304] Setting ErrFile to fd 2...
	I0805 04:24:16.736149    8297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:24:16.736317    8297 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:24:16.736537    8297 mustload.go:65] Loading cluster: functional-814000
	I0805 04:24:16.736723    8297 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:24:16.741499    8297 out.go:177] * The control-plane node functional-814000 host is not running: state=Stopped
	I0805 04:24:16.745399    8297 out.go:177]   To start a cluster, run: "minikube start -p functional-814000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-814000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-814000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-814000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

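Note: update-context rewrites the kubeconfig entry for the profile to the VM's current address; this subtest and the two that follow differ only in the message they expect ("No changes" versus "context has been updated"). With a live cluster the effect can be inspected afterwards (the jsonpath filter is a sketch for a default kubeconfig):

    out/minikube-darwin-arm64 -p functional-814000 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-814000")].cluster.server}'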
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 update-context --alsologtostderr -v=2: exit status 83 (40.142875ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
** stderr ** 
	I0805 04:24:16.819514    8301 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:24:16.819648    8301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:24:16.819651    8301 out.go:304] Setting ErrFile to fd 2...
	I0805 04:24:16.819653    8301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:24:16.819809    8301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:24:16.820040    8301 mustload.go:65] Loading cluster: functional-814000
	I0805 04:24:16.820246    8301 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:24:16.823614    8301 out.go:177] * The control-plane node functional-814000 host is not running: state=Stopped
	I0805 04:24:16.827420    8301 out.go:177]   To start a cluster, run: "minikube start -p functional-814000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-814000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-814000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-814000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 update-context --alsologtostderr -v=2: exit status 83 (40.121333ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
** stderr ** 
	I0805 04:24:16.779618    8299 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:24:16.779806    8299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:24:16.779810    8299 out.go:304] Setting ErrFile to fd 2...
	I0805 04:24:16.779812    8299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:24:16.779945    8299 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:24:16.780173    8299 mustload.go:65] Loading cluster: functional-814000
	I0805 04:24:16.780363    8299 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:24:16.783421    8299 out.go:177] * The control-plane node functional-814000 host is not running: state=Stopped
	I0805 04:24:16.787434    8299 out.go:177]   To start a cluster, run: "minikube start -p functional-814000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-814000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-814000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-814000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-814000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-814000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.401583ms)

** stderr ** 
	error: context "functional-814000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-814000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

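Note: the remaining ServiceCmd subtests all depend on this deployment, so once create fails on the missing context, List/JSONOutput/HTTPS/Format/URL can only ever see the exit-83 advisory. The setup step, reproduced by hand with the image tag from the log:

    kubectl --context functional-814000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-814000 rollout status deployment hello-node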
TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 service list: exit status 83 (44.00225ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-814000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-814000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-814000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 service list -o json: exit status 83 (40.77525ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-814000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 service --namespace=default --https --url hello-node: exit status 83 (43.752583ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-814000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 service hello-node --url --format={{.IP}}: exit status 83 (41.904084ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-814000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-814000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-814000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 service hello-node --url: exit status 83 (46.713459ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-814000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
functional_test.go:1565: failed to parse "* The control-plane node functional-814000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-814000\"": parse "* The control-plane node functional-814000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-814000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)

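Note: the Format and URL failures show why stdout hygiene matters for these commands: "service --url" is expected to print nothing but a URL, so when the stopped-host advisory is printed instead, the harness feeds it to net/url and gets "invalid control character in URL" (the embedded newline). Expected shape on a healthy cluster, with a placeholder address:

    out/minikube-darwin-arm64 -p functional-814000 service hello-node --url
    http://192.168.105.4:30080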
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-814000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-814000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0805 04:23:37.835635    8087 out.go:291] Setting OutFile to fd 1 ...
I0805 04:23:37.835786    8087 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:23:37.835790    8087 out.go:304] Setting ErrFile to fd 2...
I0805 04:23:37.835792    8087 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:23:37.835915    8087 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
I0805 04:23:37.836161    8087 mustload.go:65] Loading cluster: functional-814000
I0805 04:23:37.836369    8087 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 04:23:37.841389    8087 out.go:177] * The control-plane node functional-814000 host is not running: state=Stopped
I0805 04:23:37.853370    8087 out.go:177]   To start a cluster, run: "minikube start -p functional-814000"

stdout: * The control-plane node functional-814000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-814000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-814000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 8086: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-814000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-814000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-814000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-814000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-814000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

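Note: minikube tunnel runs as a long-lived privileged process that routes the cluster service network to the host; this subtest launches two of them to exercise the second-tunnel handling. Both exited immediately with status 83, so the subsequent stop/cleanup complaints about already-finished processes and closed pipes are follow-on noise, not independent failures. Manual form, assuming a running profile:

    out/minikube-darwin-arm64 -p functional-814000 tunnel --alsologtostderr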
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-814000": client config: context "functional-814000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (112.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-814000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-814000 get svc nginx-svc: exit status 1 (73.882166ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-814000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-814000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (112.46s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image load --daemon docker.io/kicbase/echo-server:functional-814000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-814000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

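Note: the three daemon-load subtests push an image from the host Docker daemon into the guest runtime. The host-side preparation is visible in ImageTagAndLoadDaemon below:

    docker pull docker.io/kicbase/echo-server:latest
    docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-814000
    out/minikube-darwin-arm64 -p functional-814000 image load --daemon docker.io/kicbase/echo-server:functional-814000

With the guest stopped the load is a silent no-op, so the follow-up "image ls" stays empty.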
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image load --daemon docker.io/kicbase/echo-server:functional-814000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-814000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-814000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image load --daemon docker.io/kicbase/echo-server:functional-814000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-814000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image save docker.io/kicbase/echo-server:functional-814000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-814000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

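Note: ImageSaveToFile and ImageLoadFromFile are two halves of one round trip: save should write a tarball on the host, load should import it back into the guest. Since the save produced no file, the load is exercising a path that does not exist. The round trip, assuming a running profile and a writable target path:

    out/minikube-darwin-arm64 -p functional-814000 image save docker.io/kicbase/echo-server:functional-814000 /tmp/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-814000 image load /tmp/echo-server-save.tar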
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.035530541s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

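Note: the scutil dump above is the useful diagnostic here: resolver #8 shows that a cluster.local resolver pointing at 10.96.0.10 is in place, but with no VM behind that address the dig query can only time out. The resolver entry can be re-checked directly with:

    scutil --dns | grep -B1 -A3 cluster.local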
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.31s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.31s)

TestMultiControlPlane/serial/StartCluster (9.79s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-979000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-979000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.719124833s)

-- stdout --
	* [ha-979000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-979000" primary control-plane node in "ha-979000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-979000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
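Note: the "Connection refused" on /var/run/socket_vmnet above is the root cause running through this whole report: with the qemu2 driver on the socket_vmnet network, the socket_vmnet daemon must be listening on that socket before any VM can come up, and minikube gives up with exit 80 after one delete-and-retry cycle. A first check on the build host, assuming socket_vmnet is installed as a launchd service (the label may vary by install):

    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i vmnet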
** stderr ** 
	I0805 04:26:34.146896    8376 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:26:34.147007    8376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:26:34.147010    8376 out.go:304] Setting ErrFile to fd 2...
	I0805 04:26:34.147012    8376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:26:34.147134    8376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:26:34.148226    8376 out.go:298] Setting JSON to false
	I0805 04:26:34.164379    8376 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5164,"bootTime":1722852030,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:26:34.164446    8376 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:26:34.170250    8376 out.go:177] * [ha-979000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:26:34.177178    8376 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:26:34.177260    8376 notify.go:220] Checking for updates...
	I0805 04:26:34.184214    8376 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:26:34.187189    8376 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:26:34.190187    8376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:26:34.193222    8376 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:26:34.196159    8376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:26:34.199299    8376 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:26:34.203189    8376 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:26:34.210152    8376 start.go:297] selected driver: qemu2
	I0805 04:26:34.210157    8376 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:26:34.210162    8376 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:26:34.212564    8376 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:26:34.215205    8376 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:26:34.216526    8376 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:26:34.216557    8376 cni.go:84] Creating CNI manager for ""
	I0805 04:26:34.216562    8376 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 04:26:34.216567    8376 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 04:26:34.216596    8376 start.go:340] cluster config:
	{Name:ha-979000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-979000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:26:34.220368    8376 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:26:34.228225    8376 out.go:177] * Starting "ha-979000" primary control-plane node in "ha-979000" cluster
	I0805 04:26:34.232149    8376 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:26:34.232165    8376 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:26:34.232177    8376 cache.go:56] Caching tarball of preloaded images
	I0805 04:26:34.232233    8376 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:26:34.232238    8376 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:26:34.232430    8376 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/ha-979000/config.json ...
	I0805 04:26:34.232441    8376 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/ha-979000/config.json: {Name:mk2ebdfe593e23c401f5214f16105ead2ebfcabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:26:34.232808    8376 start.go:360] acquireMachinesLock for ha-979000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:26:34.232843    8376 start.go:364] duration metric: took 28.833µs to acquireMachinesLock for "ha-979000"
	I0805 04:26:34.232853    8376 start.go:93] Provisioning new machine with config: &{Name:ha-979000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-979000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:26:34.232883    8376 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:26:34.241180    8376 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:26:34.259121    8376 start.go:159] libmachine.API.Create for "ha-979000" (driver="qemu2")
	I0805 04:26:34.259156    8376 client.go:168] LocalClient.Create starting
	I0805 04:26:34.259221    8376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:26:34.259254    8376 main.go:141] libmachine: Decoding PEM data...
	I0805 04:26:34.259263    8376 main.go:141] libmachine: Parsing certificate...
	I0805 04:26:34.259298    8376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:26:34.259320    8376 main.go:141] libmachine: Decoding PEM data...
	I0805 04:26:34.259328    8376 main.go:141] libmachine: Parsing certificate...
	I0805 04:26:34.259807    8376 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:26:34.405149    8376 main.go:141] libmachine: Creating SSH key...
	I0805 04:26:34.447630    8376 main.go:141] libmachine: Creating Disk image...
	I0805 04:26:34.447636    8376 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:26:34.447805    8376 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2
	I0805 04:26:34.456851    8376 main.go:141] libmachine: STDOUT: 
	I0805 04:26:34.456868    8376 main.go:141] libmachine: STDERR: 
	I0805 04:26:34.456921    8376 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2 +20000M
	I0805 04:26:34.464701    8376 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:26:34.464715    8376 main.go:141] libmachine: STDERR: 
	I0805 04:26:34.464727    8376 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2
	I0805 04:26:34.464731    8376 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:26:34.464741    8376 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:26:34.464779    8376 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:cf:ed:03:da:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2
	I0805 04:26:34.466402    8376 main.go:141] libmachine: STDOUT: 
	I0805 04:26:34.466416    8376 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:26:34.466435    8376 client.go:171] duration metric: took 207.2745ms to LocalClient.Create
	I0805 04:26:36.468593    8376 start.go:128] duration metric: took 2.235708708s to createHost
	I0805 04:26:36.468670    8376 start.go:83] releasing machines lock for "ha-979000", held for 2.235834417s
	W0805 04:26:36.468832    8376 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:26:36.478979    8376 out.go:177] * Deleting "ha-979000" in qemu2 ...
	W0805 04:26:36.504252    8376 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:26:36.504281    8376 start.go:729] Will try again in 5 seconds ...
	I0805 04:26:41.506430    8376 start.go:360] acquireMachinesLock for ha-979000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:26:41.506929    8376 start.go:364] duration metric: took 406.375µs to acquireMachinesLock for "ha-979000"
	I0805 04:26:41.507064    8376 start.go:93] Provisioning new machine with config: &{Name:ha-979000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-979000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:26:41.507352    8376 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:26:41.512154    8376 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:26:41.560507    8376 start.go:159] libmachine.API.Create for "ha-979000" (driver="qemu2")
	I0805 04:26:41.560550    8376 client.go:168] LocalClient.Create starting
	I0805 04:26:41.560672    8376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:26:41.560737    8376 main.go:141] libmachine: Decoding PEM data...
	I0805 04:26:41.560762    8376 main.go:141] libmachine: Parsing certificate...
	I0805 04:26:41.560816    8376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:26:41.560860    8376 main.go:141] libmachine: Decoding PEM data...
	I0805 04:26:41.560876    8376 main.go:141] libmachine: Parsing certificate...
	I0805 04:26:41.561620    8376 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:26:41.716297    8376 main.go:141] libmachine: Creating SSH key...
	I0805 04:26:41.776077    8376 main.go:141] libmachine: Creating Disk image...
	I0805 04:26:41.776084    8376 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:26:41.776272    8376 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2
	I0805 04:26:41.785503    8376 main.go:141] libmachine: STDOUT: 
	I0805 04:26:41.785519    8376 main.go:141] libmachine: STDERR: 
	I0805 04:26:41.785563    8376 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2 +20000M
	I0805 04:26:41.793302    8376 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:26:41.793316    8376 main.go:141] libmachine: STDERR: 
	I0805 04:26:41.793328    8376 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2
	I0805 04:26:41.793334    8376 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:26:41.793344    8376 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:26:41.793375    8376 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:d8:17:f6:a7:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2
	I0805 04:26:41.794947    8376 main.go:141] libmachine: STDOUT: 
	I0805 04:26:41.794963    8376 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:26:41.794976    8376 client.go:171] duration metric: took 234.420875ms to LocalClient.Create
	I0805 04:26:43.797139    8376 start.go:128] duration metric: took 2.289769542s to createHost
	I0805 04:26:43.797206    8376 start.go:83] releasing machines lock for "ha-979000", held for 2.290270875s
	W0805 04:26:43.797547    8376 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-979000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-979000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:26:43.808029    8376 out.go:177] 
	W0805 04:26:43.814211    8376 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:26:43.814233    8376 out.go:239] * 
	* 
	W0805 04:26:43.817124    8376 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:26:43.824037    8376 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-979000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (64.458875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.79s)
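
Every start attempt in this section fails at the same step: socket_vmnet_client cannot reach the /var/run/socket_vmnet socket, so the VM never boots. A hedged host-side checklist, assuming socket_vmnet was installed via Homebrew as the /opt/socket_vmnet paths in the log suggest:

    ls -l /var/run/socket_vmnet              # the listening socket must exist and be accessible
    sudo launchctl list | grep -i vmnet      # check whether a socket_vmnet daemon is loaded
    sudo brew services start socket_vmnet    # one common way to (re)start the daemon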

TestMultiControlPlane/serial/DeployApp (73.04s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.521708ms)

** stderr ** 
	error: cluster "ha-979000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- rollout status deployment/busybox: exit status 1 (55.945041ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.352291ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.564125ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.40475ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.622792ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.016958ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.399541ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.875875ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.566375ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.395709ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.792875ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.926708ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.087667ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.914792ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.756ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (29.590875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (73.04s)
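
The kubectl failures above ("cluster \"ha-979000\" does not exist", "no server found") are downstream of the failed StartCluster: no context was ever written for the profile. A quick hypothetical sanity check against the kubeconfig path shown earlier in this report:

    kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/19377-7130/kubeconfig
    out/minikube-darwin-arm64 profile list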

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-979000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.056834ms)

** stderr ** 
	error: no server found for cluster "ha-979000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (29.796958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-979000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-979000 -v=7 --alsologtostderr: exit status 83 (40.630667ms)

-- stdout --
	* The control-plane node ha-979000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-979000"

-- /stdout --
** stderr ** 
	I0805 04:27:57.055154    8472 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:27:57.055881    8472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:57.055885    8472 out.go:304] Setting ErrFile to fd 2...
	I0805 04:27:57.055887    8472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:57.056044    8472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:27:57.056278    8472 mustload.go:65] Loading cluster: ha-979000
	I0805 04:27:57.056458    8472 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:27:57.060580    8472 out.go:177] * The control-plane node ha-979000 host is not running: state=Stopped
	I0805 04:27:57.063500    8472 out.go:177]   To start a cluster, run: "minikube start -p ha-979000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-979000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (29.849458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-979000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-979000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.726791ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-979000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-979000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-979000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (30.06625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-979000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-979000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-979000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-979000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-979000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-979000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-979000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-979000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (28.949125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
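
The assertions here parse `profile list --output json` and compare the node count and profile status. A compact way to eyeball the same fields by hand (jq is assumed to be available; it is not part of the test):

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | {Name, Status, nodes: (.Config.Nodes | length)}'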

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status --output json -v=7 --alsologtostderr: exit status 7 (29.778458ms)

-- stdout --
	{"Name":"ha-979000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0805 04:27:57.258751    8484 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:27:57.258887    8484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:57.258890    8484 out.go:304] Setting ErrFile to fd 2...
	I0805 04:27:57.258892    8484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:57.259011    8484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:27:57.259123    8484 out.go:298] Setting JSON to true
	I0805 04:27:57.259132    8484 mustload.go:65] Loading cluster: ha-979000
	I0805 04:27:57.259198    8484 notify.go:220] Checking for updates...
	I0805 04:27:57.259351    8484 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:27:57.259358    8484 status.go:255] checking status of ha-979000 ...
	I0805 04:27:57.259576    8484 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:27:57.259579    8484 status.go:343] host is not running, skipping remaining checks
	I0805 04:27:57.259582    8484 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-979000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (29.603583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
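
The decode error is a shape mismatch rather than corrupt output: with a single stopped node, `status --output json` emits one JSON object, while the test unmarshals into a `[]cmd.Status` slice. A hypothetical normalization when inspecting by hand (jq assumed):

    out/minikube-darwin-arm64 -p ha-979000 status --output json \
      | jq 'if type == "array" then . else [.] end'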

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 node stop m02 -v=7 --alsologtostderr: exit status 85 (47.43225ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0805 04:27:57.318449    8488 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:27:57.318847    8488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:57.318851    8488 out.go:304] Setting ErrFile to fd 2...
	I0805 04:27:57.318854    8488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:57.319048    8488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:27:57.319305    8488 mustload.go:65] Loading cluster: ha-979000
	I0805 04:27:57.319503    8488 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:27:57.323380    8488 out.go:177] 
	W0805 04:27:57.326352    8488 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0805 04:27:57.326356    8488 out.go:239] * 
	* 
	W0805 04:27:57.328476    8488 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:27:57.333255    8488 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-979000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr: exit status 7 (29.698667ms)

-- stdout --
	ha-979000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:27:57.366186    8490 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:27:57.366386    8490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:57.366389    8490 out.go:304] Setting ErrFile to fd 2...
	I0805 04:27:57.366391    8490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:57.366518    8490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:27:57.366640    8490 out.go:298] Setting JSON to false
	I0805 04:27:57.366648    8490 mustload.go:65] Loading cluster: ha-979000
	I0805 04:27:57.366715    8490 notify.go:220] Checking for updates...
	I0805 04:27:57.366851    8490 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:27:57.366857    8490 status.go:255] checking status of ha-979000 ...
	I0805 04:27:57.367074    8490 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:27:57.367078    8490 status.go:343] host is not running, skipping remaining checks
	I0805 04:27:57.367080    8490 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr": ha-979000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr": ha-979000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr": ha-979000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr": ha-979000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (30.082416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
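Note the failure mode here: StartCluster failed earlier in the run, so node m02 was never created, and "node stop m02" exits 85 with GUEST_NODE_RETRIEVE instead of exercising the stop path at all. A minimal Go sketch of a pre-check that would make this explicit (hypothetical helper, not part of ha_test.go; it assumes only the "node list" subcommand that this run invokes later):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hasNode reports whether `minikube node list` for the given profile
    // mentions the node name. Illustrative only; the real harness does not
    // guard its subtests this way.
    func hasNode(profile, node string) (bool, error) {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "node", "list", "-p", profile).Output()
        if err != nil {
            return false, err
        }
        return strings.Contains(string(out), node), nil
    }

    func main() {
        ok, err := hasNode("ha-979000", "m02")
        fmt.Println(ok, err) // in this run: false <nil>, only the primary node exists
    }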

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-979000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-979000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-979000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-979000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (35.003333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

TestMultiControlPlane/serial/RestartSecondaryNode (56.05s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 node start m02 -v=7 --alsologtostderr: exit status 85 (47.424542ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0805 04:27:57.508951    8499 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:27:57.509424    8499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:57.509428    8499 out.go:304] Setting ErrFile to fd 2...
	I0805 04:27:57.509430    8499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:57.509609    8499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:27:57.509822    8499 mustload.go:65] Loading cluster: ha-979000
	I0805 04:27:57.510013    8499 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:27:57.513337    8499 out.go:177] 
	W0805 04:27:57.517302    8499 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0805 04:27:57.517307    8499 out.go:239] * 
	* 
	W0805 04:27:57.519274    8499 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:27:57.523308    8499 out.go:177] 

** /stderr **
ha_test.go:422: I0805 04:27:57.508951    8499 out.go:291] Setting OutFile to fd 1 ...
I0805 04:27:57.509424    8499 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:27:57.509428    8499 out.go:304] Setting ErrFile to fd 2...
I0805 04:27:57.509430    8499 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:27:57.509609    8499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
I0805 04:27:57.509822    8499 mustload.go:65] Loading cluster: ha-979000
I0805 04:27:57.510013    8499 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 04:27:57.513337    8499 out.go:177] 
W0805 04:27:57.517302    8499 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0805 04:27:57.517307    8499 out.go:239] * 
* 
W0805 04:27:57.519274    8499 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0805 04:27:57.523308    8499 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-979000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr: exit status 7 (30.468917ms)

-- stdout --
	ha-979000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:27:57.557130    8501 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:27:57.557266    8501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:57.557269    8501 out.go:304] Setting ErrFile to fd 2...
	I0805 04:27:57.557271    8501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:57.557416    8501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:27:57.557529    8501 out.go:298] Setting JSON to false
	I0805 04:27:57.557537    8501 mustload.go:65] Loading cluster: ha-979000
	I0805 04:27:57.557604    8501 notify.go:220] Checking for updates...
	I0805 04:27:57.557763    8501 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:27:57.557770    8501 status.go:255] checking status of ha-979000 ...
	I0805 04:27:57.557974    8501 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:27:57.557977    8501 status.go:343] host is not running, skipping remaining checks
	I0805 04:27:57.557980    8501 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr: exit status 7 (71.611333ms)

-- stdout --
	ha-979000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:27:58.431788    8505 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:27:58.432285    8505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:58.432304    8505 out.go:304] Setting ErrFile to fd 2...
	I0805 04:27:58.432311    8505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:27:58.432842    8505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:27:58.433031    8505 out.go:298] Setting JSON to false
	I0805 04:27:58.433048    8505 mustload.go:65] Loading cluster: ha-979000
	I0805 04:27:58.433084    8505 notify.go:220] Checking for updates...
	I0805 04:27:58.433309    8505 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:27:58.433318    8505 status.go:255] checking status of ha-979000 ...
	I0805 04:27:58.433593    8505 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:27:58.433598    8505 status.go:343] host is not running, skipping remaining checks
	I0805 04:27:58.433602    8505 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr: exit status 7 (74.124041ms)

-- stdout --
	ha-979000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:28:00.248506    8507 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:28:00.248717    8507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:00.248722    8507 out.go:304] Setting ErrFile to fd 2...
	I0805 04:28:00.248725    8507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:00.248913    8507 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:28:00.249072    8507 out.go:298] Setting JSON to false
	I0805 04:28:00.249084    8507 mustload.go:65] Loading cluster: ha-979000
	I0805 04:28:00.249127    8507 notify.go:220] Checking for updates...
	I0805 04:28:00.249349    8507 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:28:00.249360    8507 status.go:255] checking status of ha-979000 ...
	I0805 04:28:00.249668    8507 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:28:00.249673    8507 status.go:343] host is not running, skipping remaining checks
	I0805 04:28:00.249676    8507 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr: exit status 7 (74.473667ms)

-- stdout --
	ha-979000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:28:02.071308    8509 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:28:02.071520    8509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:02.071525    8509 out.go:304] Setting ErrFile to fd 2...
	I0805 04:28:02.071528    8509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:02.071693    8509 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:28:02.071826    8509 out.go:298] Setting JSON to false
	I0805 04:28:02.071838    8509 mustload.go:65] Loading cluster: ha-979000
	I0805 04:28:02.071882    8509 notify.go:220] Checking for updates...
	I0805 04:28:02.072074    8509 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:28:02.072083    8509 status.go:255] checking status of ha-979000 ...
	I0805 04:28:02.072374    8509 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:28:02.072379    8509 status.go:343] host is not running, skipping remaining checks
	I0805 04:28:02.072382    8509 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr: exit status 7 (74.241875ms)

-- stdout --
	ha-979000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:28:06.953358    8511 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:28:06.953581    8511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:06.953585    8511 out.go:304] Setting ErrFile to fd 2...
	I0805 04:28:06.953588    8511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:06.953777    8511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:28:06.953927    8511 out.go:298] Setting JSON to false
	I0805 04:28:06.953939    8511 mustload.go:65] Loading cluster: ha-979000
	I0805 04:28:06.953977    8511 notify.go:220] Checking for updates...
	I0805 04:28:06.954207    8511 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:28:06.954216    8511 status.go:255] checking status of ha-979000 ...
	I0805 04:28:06.954485    8511 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:28:06.954490    8511 status.go:343] host is not running, skipping remaining checks
	I0805 04:28:06.954493    8511 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr: exit status 7 (74.751542ms)

-- stdout --
	ha-979000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:28:13.129570    8514 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:28:13.129803    8514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:13.129813    8514 out.go:304] Setting ErrFile to fd 2...
	I0805 04:28:13.129817    8514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:13.130037    8514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:28:13.130250    8514 out.go:298] Setting JSON to false
	I0805 04:28:13.130270    8514 mustload.go:65] Loading cluster: ha-979000
	I0805 04:28:13.130312    8514 notify.go:220] Checking for updates...
	I0805 04:28:13.130561    8514 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:28:13.130572    8514 status.go:255] checking status of ha-979000 ...
	I0805 04:28:13.130867    8514 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:28:13.130872    8514 status.go:343] host is not running, skipping remaining checks
	I0805 04:28:13.130875    8514 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr: exit status 7 (76.026667ms)

-- stdout --
	ha-979000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:28:19.789088    8516 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:28:19.789501    8516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:19.789508    8516 out.go:304] Setting ErrFile to fd 2...
	I0805 04:28:19.789511    8516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:19.789754    8516 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:28:19.789949    8516 out.go:298] Setting JSON to false
	I0805 04:28:19.789959    8516 mustload.go:65] Loading cluster: ha-979000
	I0805 04:28:19.790177    8516 notify.go:220] Checking for updates...
	I0805 04:28:19.790561    8516 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:28:19.790583    8516 status.go:255] checking status of ha-979000 ...
	I0805 04:28:19.790851    8516 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:28:19.790856    8516 status.go:343] host is not running, skipping remaining checks
	I0805 04:28:19.790860    8516 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr: exit status 7 (73.078917ms)

-- stdout --
	ha-979000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:28:35.966309    8520 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:28:35.966523    8520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:35.966528    8520 out.go:304] Setting ErrFile to fd 2...
	I0805 04:28:35.966531    8520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:35.966717    8520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:28:35.966909    8520 out.go:298] Setting JSON to false
	I0805 04:28:35.966923    8520 mustload.go:65] Loading cluster: ha-979000
	I0805 04:28:35.966973    8520 notify.go:220] Checking for updates...
	I0805 04:28:35.967185    8520 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:28:35.967193    8520 status.go:255] checking status of ha-979000 ...
	I0805 04:28:35.967462    8520 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:28:35.967467    8520 status.go:343] host is not running, skipping remaining checks
	I0805 04:28:35.967470    8520 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr: exit status 7 (76.719167ms)

-- stdout --
	ha-979000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:28:53.497509    8530 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:28:53.497723    8530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:53.497728    8530 out.go:304] Setting ErrFile to fd 2...
	I0805 04:28:53.497732    8530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:53.497943    8530 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:28:53.498100    8530 out.go:298] Setting JSON to false
	I0805 04:28:53.498112    8530 mustload.go:65] Loading cluster: ha-979000
	I0805 04:28:53.498166    8530 notify.go:220] Checking for updates...
	I0805 04:28:53.498411    8530 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:28:53.498422    8530 status.go:255] checking status of ha-979000 ...
	I0805 04:28:53.498728    8530 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:28:53.498733    8530 status.go:343] host is not running, skipping remaining checks
	I0805 04:28:53.498737    8530 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr" : exit status 7
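The timestamps on the repeated status calls above (04:27:57, 04:27:58, 04:28:00, 04:28:02, 04:28:06, 04:28:13, 04:28:19, 04:28:35, 04:28:53) show the harness retrying with growing delays for roughly a minute before giving up. A rough sketch of that polling pattern (the delays and budget here are illustrative, not the actual retry code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        delay := time.Second
        deadline := time.Now().Add(time.Minute)
        for time.Now().Before(deadline) {
            out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "ha-979000",
                "status", "--format", "{{.Host}}").Output()
            if strings.TrimSpace(string(out)) == "Running" {
                fmt.Println("host is running")
                return
            }
            time.Sleep(delay)
            delay *= 2 // back off, as the widening timestamps above suggest
        }
        fmt.Println("gave up: host stayed Stopped") // the outcome in this run
    }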
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (33.142833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (56.05s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-979000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-979000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-979000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-979000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-979000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-979000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-979000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-979000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (28.782083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.25s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-979000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-979000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-979000 -v=7 --alsologtostderr: (1.899505542s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-979000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-979000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.219368875s)

-- stdout --
	* [ha-979000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-979000" primary control-plane node in "ha-979000" cluster
	* Restarting existing qemu2 VM for "ha-979000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-979000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:28:55.602849    8551 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:28:55.603003    8551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:55.603007    8551 out.go:304] Setting ErrFile to fd 2...
	I0805 04:28:55.603011    8551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:28:55.603185    8551 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:28:55.604430    8551 out.go:298] Setting JSON to false
	I0805 04:28:55.623943    8551 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5305,"bootTime":1722852030,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:28:55.624027    8551 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:28:55.628424    8551 out.go:177] * [ha-979000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:28:55.634394    8551 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:28:55.634422    8551 notify.go:220] Checking for updates...
	I0805 04:28:55.641417    8551 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:28:55.644384    8551 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:28:55.647344    8551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:28:55.650349    8551 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:28:55.653374    8551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:28:55.656601    8551 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:28:55.656654    8551 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:28:55.661370    8551 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:28:55.668353    8551 start.go:297] selected driver: qemu2
	I0805 04:28:55.668360    8551 start.go:901] validating driver "qemu2" against &{Name:ha-979000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-979000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:28:55.668436    8551 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:28:55.670885    8551 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:28:55.670921    8551 cni.go:84] Creating CNI manager for ""
	I0805 04:28:55.670926    8551 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 04:28:55.670965    8551 start.go:340] cluster config:
	{Name:ha-979000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-979000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:28:55.674666    8551 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:28:55.682369    8551 out.go:177] * Starting "ha-979000" primary control-plane node in "ha-979000" cluster
	I0805 04:28:55.686399    8551 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:28:55.686416    8551 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:28:55.686433    8551 cache.go:56] Caching tarball of preloaded images
	I0805 04:28:55.686498    8551 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:28:55.686504    8551 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:28:55.686566    8551 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/ha-979000/config.json ...
	I0805 04:28:55.687037    8551 start.go:360] acquireMachinesLock for ha-979000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:28:55.687075    8551 start.go:364] duration metric: took 30.917µs to acquireMachinesLock for "ha-979000"
	I0805 04:28:55.687084    8551 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:28:55.687090    8551 fix.go:54] fixHost starting: 
	I0805 04:28:55.687218    8551 fix.go:112] recreateIfNeeded on ha-979000: state=Stopped err=<nil>
	W0805 04:28:55.687226    8551 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:28:55.694374    8551 out.go:177] * Restarting existing qemu2 VM for "ha-979000" ...
	I0805 04:28:55.698417    8551 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:28:55.698464    8551 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:d8:17:f6:a7:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2
	I0805 04:28:55.700738    8551 main.go:141] libmachine: STDOUT: 
	I0805 04:28:55.700759    8551 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:28:55.700790    8551 fix.go:56] duration metric: took 13.700875ms for fixHost
	I0805 04:28:55.700795    8551 start.go:83] releasing machines lock for "ha-979000", held for 13.715375ms
	W0805 04:28:55.700802    8551 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:28:55.700849    8551 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:28:55.700855    8551 start.go:729] Will try again in 5 seconds ...
	I0805 04:29:00.702963    8551 start.go:360] acquireMachinesLock for ha-979000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:29:00.703363    8551 start.go:364] duration metric: took 317.5µs to acquireMachinesLock for "ha-979000"
	I0805 04:29:00.703481    8551 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:29:00.703501    8551 fix.go:54] fixHost starting: 
	I0805 04:29:00.704178    8551 fix.go:112] recreateIfNeeded on ha-979000: state=Stopped err=<nil>
	W0805 04:29:00.704204    8551 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:29:00.708576    8551 out.go:177] * Restarting existing qemu2 VM for "ha-979000" ...
	I0805 04:29:00.712525    8551 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:29:00.712729    8551 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:d8:17:f6:a7:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2
	I0805 04:29:00.722074    8551 main.go:141] libmachine: STDOUT: 
	I0805 04:29:00.722142    8551 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:29:00.722221    8551 fix.go:56] duration metric: took 18.719084ms for fixHost
	I0805 04:29:00.722234    8551 start.go:83] releasing machines lock for "ha-979000", held for 18.849541ms
	W0805 04:29:00.722407    8551 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-979000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-979000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:29:00.729588    8551 out.go:177] 
	W0805 04:29:00.733622    8551 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:29:00.733648    8551 out.go:239] * 
	* 
	W0805 04:29:00.736298    8551 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:29:00.743559    8551 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-979000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-979000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (31.495584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.25s)
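All of the failures in this block, and in the blocks that follow, share one root cause: the socket_vmnet daemon that provides guest networking for minikube's qemu2 driver is refusing connections on /var/run/socket_vmnet, so the VM can never be launched. A minimal health check from the build host is sketched below, assuming socket_vmnet was installed from source into /opt/socket_vmnet as the paths in the log suggest; the launchd label io.github.lima-vm.socket_vmnet is the one socket_vmnet's own `make install` registers and is an assumption here, not something this report confirms.

    # does the control socket exist at the path the driver uses?
    ls -l /var/run/socket_vmnet
    # is the daemon loaded and running under launchd? (label is an assumption, see above)
    sudo launchctl print system/io.github.lima-vm.socket_vmnet
    # restart it in place if it is loaded but not accepting connections
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

With the daemon listening again, the "Restarting existing qemu2 VM" step above should proceed instead of exiting with GUEST_PROVISION.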

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.055166ms)

-- stdout --
	* The control-plane node ha-979000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-979000"

-- /stdout --
** stderr ** 
	I0805 04:29:00.887379    8563 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:29:00.887784    8563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:29:00.887790    8563 out.go:304] Setting ErrFile to fd 2...
	I0805 04:29:00.887793    8563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:29:00.887932    8563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:29:00.888145    8563 mustload.go:65] Loading cluster: ha-979000
	I0805 04:29:00.888334    8563 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:29:00.892580    8563 out.go:177] * The control-plane node ha-979000 host is not running: state=Stopped
	I0805 04:29:00.895569    8563 out.go:177]   To start a cluster, run: "minikube start -p ha-979000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-979000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr: exit status 7 (29.536792ms)

-- stdout --
	ha-979000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:29:00.927515    8565 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:29:00.927669    8565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:29:00.927672    8565 out.go:304] Setting ErrFile to fd 2...
	I0805 04:29:00.927674    8565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:29:00.927789    8565 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:29:00.927899    8565 out.go:298] Setting JSON to false
	I0805 04:29:00.927914    8565 mustload.go:65] Loading cluster: ha-979000
	I0805 04:29:00.927970    8565 notify.go:220] Checking for updates...
	I0805 04:29:00.928111    8565 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:29:00.928118    8565 status.go:255] checking status of ha-979000 ...
	I0805 04:29:00.928303    8565 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:29:00.928307    8565 status.go:343] host is not running, skipping remaining checks
	I0805 04:29:00.928309    8565 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (29.605166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-979000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-979000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-979000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-979000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (29.044958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
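The assertion above parses `minikube profile list --output json`. To inspect by hand the same fields the test checks, a reader like the following is enough (jq is not part of the test harness; it is only a convenience here):

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | [.Name, .Status, (.Config.Nodes | length)] | @tsv'

Because the VM never started, the profile reports "Stopped" with a single node, so the expected "Degraded" status cannot be observed.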

TestMultiControlPlane/serial/StopCluster (3.55s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-979000 stop -v=7 --alsologtostderr: (3.45052325s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr: exit status 7 (66.683584ms)

-- stdout --
	ha-979000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:29:04.550280    8594 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:29:04.550461    8594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:29:04.550465    8594 out.go:304] Setting ErrFile to fd 2...
	I0805 04:29:04.550468    8594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:29:04.550631    8594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:29:04.550790    8594 out.go:298] Setting JSON to false
	I0805 04:29:04.550808    8594 mustload.go:65] Loading cluster: ha-979000
	I0805 04:29:04.550856    8594 notify.go:220] Checking for updates...
	I0805 04:29:04.551083    8594 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:29:04.551091    8594 status.go:255] checking status of ha-979000 ...
	I0805 04:29:04.551381    8594 status.go:330] ha-979000 host status = "Stopped" (err=<nil>)
	I0805 04:29:04.551386    8594 status.go:343] host is not running, skipping remaining checks
	I0805 04:29:04.551389    8594 status.go:257] ha-979000 status: &{Name:ha-979000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr": ha-979000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr": ha-979000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-979000 status -v=7 --alsologtostderr": ha-979000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (32.315666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.55s)
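The stop itself succeeded; what fails is the follow-up status check, which expects a three-node HA cluster and finds one node. The exit code is worth noting: minikube's status command composes its exit code from bit flags (1 = host stopped, 2 = kubelet stopped, 4 = apiserver stopped in current minikube sources; an inference from the code, not from this report), so exit status 7 is the expected result for a fully stopped profile, which is why helpers_test labels it "may be ok":

    out/minikube-darwin-arm64 -p ha-979000 status; echo "status exit code: $?"
    # 7 = 1 (host stopped) | 2 (kubelet stopped) | 4 (apiserver stopped)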

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-979000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-979000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.191342s)

-- stdout --
	* [ha-979000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-979000" primary control-plane node in "ha-979000" cluster
	* Restarting existing qemu2 VM for "ha-979000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-979000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:29:04.611910    8598 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:29:04.612046    8598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:29:04.612049    8598 out.go:304] Setting ErrFile to fd 2...
	I0805 04:29:04.612052    8598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:29:04.612177    8598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:29:04.613282    8598 out.go:298] Setting JSON to false
	I0805 04:29:04.629271    8598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5314,"bootTime":1722852030,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:29:04.629350    8598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:29:04.633307    8598 out.go:177] * [ha-979000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:29:04.641323    8598 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:29:04.641373    8598 notify.go:220] Checking for updates...
	I0805 04:29:04.649267    8598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:29:04.656246    8598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:29:04.660299    8598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:29:04.663255    8598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:29:04.670218    8598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:29:04.673527    8598 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:29:04.673790    8598 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:29:04.678198    8598 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:29:04.685278    8598 start.go:297] selected driver: qemu2
	I0805 04:29:04.685282    8598 start.go:901] validating driver "qemu2" against &{Name:ha-979000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-979000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:29:04.685333    8598 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:29:04.687571    8598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:29:04.687614    8598 cni.go:84] Creating CNI manager for ""
	I0805 04:29:04.687618    8598 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 04:29:04.687661    8598 start.go:340] cluster config:
	{Name:ha-979000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-979000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:29:04.691069    8598 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:29:04.699237    8598 out.go:177] * Starting "ha-979000" primary control-plane node in "ha-979000" cluster
	I0805 04:29:04.703241    8598 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:29:04.703265    8598 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:29:04.703276    8598 cache.go:56] Caching tarball of preloaded images
	I0805 04:29:04.703333    8598 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:29:04.703338    8598 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:29:04.703396    8598 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/ha-979000/config.json ...
	I0805 04:29:04.703894    8598 start.go:360] acquireMachinesLock for ha-979000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:29:04.703931    8598 start.go:364] duration metric: took 30.666µs to acquireMachinesLock for "ha-979000"
	I0805 04:29:04.703940    8598 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:29:04.703947    8598 fix.go:54] fixHost starting: 
	I0805 04:29:04.704084    8598 fix.go:112] recreateIfNeeded on ha-979000: state=Stopped err=<nil>
	W0805 04:29:04.704093    8598 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:29:04.707268    8598 out.go:177] * Restarting existing qemu2 VM for "ha-979000" ...
	I0805 04:29:04.714229    8598 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:29:04.714266    8598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:d8:17:f6:a7:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2
	I0805 04:29:04.716353    8598 main.go:141] libmachine: STDOUT: 
	I0805 04:29:04.716374    8598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:29:04.716405    8598 fix.go:56] duration metric: took 12.459667ms for fixHost
	I0805 04:29:04.716410    8598 start.go:83] releasing machines lock for "ha-979000", held for 12.474375ms
	W0805 04:29:04.716419    8598 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:29:04.716474    8598 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:29:04.716483    8598 start.go:729] Will try again in 5 seconds ...
	I0805 04:29:09.718647    8598 start.go:360] acquireMachinesLock for ha-979000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:29:09.719168    8598 start.go:364] duration metric: took 420.875µs to acquireMachinesLock for "ha-979000"
	I0805 04:29:09.719344    8598 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:29:09.719367    8598 fix.go:54] fixHost starting: 
	I0805 04:29:09.720073    8598 fix.go:112] recreateIfNeeded on ha-979000: state=Stopped err=<nil>
	W0805 04:29:09.720099    8598 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:29:09.724609    8598 out.go:177] * Restarting existing qemu2 VM for "ha-979000" ...
	I0805 04:29:09.732553    8598 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:29:09.732812    8598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:d8:17:f6:a7:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/ha-979000/disk.qcow2
	I0805 04:29:09.742130    8598 main.go:141] libmachine: STDOUT: 
	I0805 04:29:09.742192    8598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:29:09.742286    8598 fix.go:56] duration metric: took 22.924208ms for fixHost
	I0805 04:29:09.742302    8598 start.go:83] releasing machines lock for "ha-979000", held for 23.1115ms
	W0805 04:29:09.742482    8598 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-979000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-979000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:29:09.749542    8598 out.go:177] 
	W0805 04:29:09.753582    8598 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:29:09.753632    8598 out.go:239] * 
	* 
	W0805 04:29:09.756305    8598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:29:09.763586    8598 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-979000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (68.151167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
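The QEMU command line in the log is wrapped by socket_vmnet_client, which first connects to the daemon socket and then launches QEMU with the connected file descriptor (hence `-netdev socket,id=net0,fd=3`). That connection step can be exercised on its own to separate daemon problems from QEMU problems; /usr/bin/true serves as a do-nothing payload:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
      && echo "daemon reachable" \
      || echo "daemon unreachable"

On this host the command would reproduce the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' error seen throughout the report, without involving QEMU at all.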

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-979000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-979000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-979000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-979000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (29.386875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-979000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-979000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.998125ms)

-- stdout --
	* The control-plane node ha-979000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-979000"

-- /stdout --
** stderr ** 
	I0805 04:29:09.954227    8614 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:29:09.954360    8614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:29:09.954363    8614 out.go:304] Setting ErrFile to fd 2...
	I0805 04:29:09.954365    8614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:29:09.954475    8614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:29:09.954685    8614 mustload.go:65] Loading cluster: ha-979000
	I0805 04:29:09.954861    8614 config.go:182] Loaded profile config "ha-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:29:09.958922    8614 out.go:177] * The control-plane node ha-979000 host is not running: state=Stopped
	I0805 04:29:09.962923    8614 out.go:177]   To start a cluster, run: "minikube start -p ha-979000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-979000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (29.326958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-979000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-979000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-979000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-979000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-979000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-979000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-979000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-979000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-979000 -n ha-979000: exit status 7 (29.464625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-979000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestImageBuild/serial/Setup (9.87s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-764000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-764000 --driver=qemu2 : exit status 80 (9.799415041s)

-- stdout --
	* [image-764000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-764000" primary control-plane node in "image-764000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-764000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-764000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-764000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-764000 -n image-764000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-764000 -n image-764000: exit status 7 (67.20925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-764000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.87s)
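Unlike the ha-979000 runs above, this test exercises the create path: minikube deletes the half-created VM and retries once before exiting with GUEST_PROVISION, leaving a Stopped profile behind. Stale profiles like this can be cleared before a re-run (the profile name comes from the log; `--all` removes every profile, so it only belongs on a dedicated test host):

    out/minikube-darwin-arm64 delete -p image-764000
    # or, on a throwaway CI host:
    out/minikube-darwin-arm64 delete --all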

TestJSONOutput/start/Command (9.77s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-928000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-928000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.770745042s)

-- stdout --
	{"specversion":"1.0","id":"8f0b060b-8250-4c6f-ae79-75611bf3fad4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-928000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"180cfeee-8be6-4627-bf07-a3212d9f5dea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19377"}}
	{"specversion":"1.0","id":"743aa66d-9209-40d7-a99b-faf108a9a451","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig"}}
	{"specversion":"1.0","id":"928ecdbc-6226-4707-bb14-8f9a435d2d2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"40007eba-0974-4bc7-a275-7324d89df541","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"aafc3926-9626-46bb-bfc9-8a05b6e908fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube"}}
	{"specversion":"1.0","id":"16bfa062-406b-4018-bbd7-4e64b8ec25ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"71b1bace-fb46-48b9-a6c5-a4fafeb1253b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c93bdbfe-ceaf-4732-bd64-20c43d987d23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"7df4256f-a254-4a7b-a6ea-e1f7b4a46ed6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-928000\" primary control-plane node in \"json-output-928000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"154ed84b-c0ba-47b4-826c-d633e0bc2da3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"dc220fec-c878-4692-b6d4-c9cedb23c316","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-928000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"8033979c-65cc-4e51-a638-a02874f798ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"d34e968c-194e-46ea-b91f-9100233af07d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"603cd51a-551e-4c2c-b540-30a71bd494d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-928000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"9ab26336-c8d0-4649-b3ef-36f850875c20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"f4dab532-2b30-4045-9c7f-856eb89c576a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-928000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.77s)
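
Note: this test fails in two distinct ways. The start exits 80 as elsewhere, and the harness's CloudEvents decoder then trips over the bare "OUTPUT:" and "ERROR:" lines emitted during the VM-launch step, which are interleaved with the JSON events; those lines are not JSON, hence "invalid character 'O' looking for beginning of value". The decode half can be reproduced in isolation (a sketch using jq as a stand-in for the harness's Go JSON decoder):

	echo 'OUTPUT: ' | jq .   # rejected for the same reason: 'O' cannot begin a JSON value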

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-928000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-928000 --output=json --user=testUser: exit status 83 (79.717458ms)

-- stdout --
	{"specversion":"1.0","id":"fc0d7da2-af35-4223-99fd-f617dbacf4a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-928000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"464c66be-17f1-48cc-80ce-67f5cc9fc845","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-928000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-928000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-928000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-928000 --output=json --user=testUser: exit status 83 (45.261208ms)

-- stdout --
	* The control-plane node json-output-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-928000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-928000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-928000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.18s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-650000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-650000 --driver=qemu2 : exit status 80 (9.88812525s)

-- stdout --
	* [first-650000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-650000" primary control-plane node in "first-650000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-650000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-650000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-650000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-05 04:29:43.59626 -0700 PDT m=+456.242507501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-656000 -n second-656000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-656000 -n second-656000: exit status 85 (76.812833ms)

-- stdout --
	* Profile "second-656000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-656000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-656000" host is not running, skipping log retrieval (state="* Profile \"second-656000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-656000\"")
helpers_test.go:175: Cleaning up "second-656000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-656000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-05 04:29:43.78328 -0700 PDT m=+456.429528376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-650000 -n first-650000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-650000 -n first-650000: exit status 7 (29.954416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-650000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-650000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-650000
--- FAIL: TestMinikubeProfile (10.18s)
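
Note: the pre-condition failure means "first-650000" never reached Running, so "second-656000" was never created at all; the exit-85 "Profile not found" in the post-mortem is expected fallout from the first failure, not an independent bug. Running the same binary's profile listing at this point (the subcommand the log itself suggests) would confirm that neither cluster exists:

	out/minikube-darwin-arm64 profile list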

TestMountStart/serial/StartWithMountFirst (10.06s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-273000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-273000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.989494208s)

-- stdout --
	* [mount-start-1-273000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-273000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-273000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-273000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-273000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-273000 -n mount-start-1-273000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-273000 -n mount-start-1-273000: exit status 7 (67.743083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-273000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.06s)

TestMultiNode/serial/FreshStart2Nodes (9.81s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-127000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-127000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.74057425s)

-- stdout --
	* [multinode-127000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-127000" primary control-plane node in "multinode-127000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-127000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:29:54.156421    8759 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:29:54.156612    8759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:29:54.156616    8759 out.go:304] Setting ErrFile to fd 2...
	I0805 04:29:54.156619    8759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:29:54.156744    8759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:29:54.157864    8759 out.go:298] Setting JSON to false
	I0805 04:29:54.173969    8759 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5364,"bootTime":1722852030,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:29:54.174085    8759 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:29:54.177660    8759 out.go:177] * [multinode-127000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:29:54.185201    8759 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:29:54.185280    8759 notify.go:220] Checking for updates...
	I0805 04:29:54.191129    8759 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:29:54.194181    8759 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:29:54.195706    8759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:29:54.199211    8759 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:29:54.202200    8759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:29:54.205334    8759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:29:54.210145    8759 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:29:54.217149    8759 start.go:297] selected driver: qemu2
	I0805 04:29:54.217155    8759 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:29:54.217161    8759 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:29:54.219466    8759 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:29:54.223157    8759 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:29:54.226245    8759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:29:54.226264    8759 cni.go:84] Creating CNI manager for ""
	I0805 04:29:54.226269    8759 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 04:29:54.226274    8759 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 04:29:54.226306    8759 start.go:340] cluster config:
	{Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:29:54.229946    8759 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:29:54.237152    8759 out.go:177] * Starting "multinode-127000" primary control-plane node in "multinode-127000" cluster
	I0805 04:29:54.241151    8759 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:29:54.241166    8759 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:29:54.241179    8759 cache.go:56] Caching tarball of preloaded images
	I0805 04:29:54.241243    8759 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:29:54.241249    8759 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:29:54.241507    8759 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/multinode-127000/config.json ...
	I0805 04:29:54.241520    8759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/multinode-127000/config.json: {Name:mk67fe185da4ded3f7ea8c5e86421ad1abb22e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:29:54.241735    8759 start.go:360] acquireMachinesLock for multinode-127000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:29:54.241768    8759 start.go:364] duration metric: took 27.083µs to acquireMachinesLock for "multinode-127000"
	I0805 04:29:54.241778    8759 start.go:93] Provisioning new machine with config: &{Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:29:54.241814    8759 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:29:54.250149    8759 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:29:54.267313    8759 start.go:159] libmachine.API.Create for "multinode-127000" (driver="qemu2")
	I0805 04:29:54.267340    8759 client.go:168] LocalClient.Create starting
	I0805 04:29:54.267405    8759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:29:54.267434    8759 main.go:141] libmachine: Decoding PEM data...
	I0805 04:29:54.267443    8759 main.go:141] libmachine: Parsing certificate...
	I0805 04:29:54.267484    8759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:29:54.267509    8759 main.go:141] libmachine: Decoding PEM data...
	I0805 04:29:54.267517    8759 main.go:141] libmachine: Parsing certificate...
	I0805 04:29:54.267939    8759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:29:54.410343    8759 main.go:141] libmachine: Creating SSH key...
	I0805 04:29:54.487283    8759 main.go:141] libmachine: Creating Disk image...
	I0805 04:29:54.487293    8759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:29:54.487488    8759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2
	I0805 04:29:54.496522    8759 main.go:141] libmachine: STDOUT: 
	I0805 04:29:54.496540    8759 main.go:141] libmachine: STDERR: 
	I0805 04:29:54.496589    8759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2 +20000M
	I0805 04:29:54.504439    8759 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:29:54.504453    8759 main.go:141] libmachine: STDERR: 
	I0805 04:29:54.504463    8759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2
	I0805 04:29:54.504469    8759 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:29:54.504482    8759 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:29:54.504507    8759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:39:2f:68:3c:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2
	I0805 04:29:54.506120    8759 main.go:141] libmachine: STDOUT: 
	I0805 04:29:54.506140    8759 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:29:54.506163    8759 client.go:171] duration metric: took 238.820125ms to LocalClient.Create
	I0805 04:29:56.508327    8759 start.go:128] duration metric: took 2.266508416s to createHost
	I0805 04:29:56.508391    8759 start.go:83] releasing machines lock for "multinode-127000", held for 2.266629792s
	W0805 04:29:56.508456    8759 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:29:56.518782    8759 out.go:177] * Deleting "multinode-127000" in qemu2 ...
	W0805 04:29:56.544961    8759 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:29:56.544986    8759 start.go:729] Will try again in 5 seconds ...
	I0805 04:30:01.547116    8759 start.go:360] acquireMachinesLock for multinode-127000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:30:01.547473    8759 start.go:364] duration metric: took 272.75µs to acquireMachinesLock for "multinode-127000"
	I0805 04:30:01.547589    8759 start.go:93] Provisioning new machine with config: &{Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:30:01.547777    8759 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:30:01.559406    8759 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:30:01.602767    8759 start.go:159] libmachine.API.Create for "multinode-127000" (driver="qemu2")
	I0805 04:30:01.602817    8759 client.go:168] LocalClient.Create starting
	I0805 04:30:01.602961    8759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:30:01.603030    8759 main.go:141] libmachine: Decoding PEM data...
	I0805 04:30:01.603049    8759 main.go:141] libmachine: Parsing certificate...
	I0805 04:30:01.603123    8759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:30:01.603189    8759 main.go:141] libmachine: Decoding PEM data...
	I0805 04:30:01.603201    8759 main.go:141] libmachine: Parsing certificate...
	I0805 04:30:01.603938    8759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:30:01.752870    8759 main.go:141] libmachine: Creating SSH key...
	I0805 04:30:01.805933    8759 main.go:141] libmachine: Creating Disk image...
	I0805 04:30:01.805938    8759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:30:01.806104    8759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2
	I0805 04:30:01.815088    8759 main.go:141] libmachine: STDOUT: 
	I0805 04:30:01.815105    8759 main.go:141] libmachine: STDERR: 
	I0805 04:30:01.815151    8759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2 +20000M
	I0805 04:30:01.822898    8759 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:30:01.822912    8759 main.go:141] libmachine: STDERR: 
	I0805 04:30:01.822920    8759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2
	I0805 04:30:01.822926    8759 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:30:01.822937    8759 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:30:01.822963    8759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:6e:99:7a:f2:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2
	I0805 04:30:01.824494    8759 main.go:141] libmachine: STDOUT: 
	I0805 04:30:01.824513    8759 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:30:01.824524    8759 client.go:171] duration metric: took 221.702833ms to LocalClient.Create
	I0805 04:30:03.826683    8759 start.go:128] duration metric: took 2.27890025s to createHost
	I0805 04:30:03.826759    8759 start.go:83] releasing machines lock for "multinode-127000", held for 2.279270209s
	W0805 04:30:03.827084    8759 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-127000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-127000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:30:03.836347    8759 out.go:177] 
	W0805 04:30:03.843565    8759 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:30:03.843611    8759 out.go:239] * 
	* 
	W0805 04:30:03.846473    8759 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:30:03.856279    8759 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-127000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (64.950375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.81s)
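
Note: the verbose trace above shows the exact launch path: libmachine wraps qemu-system-aarch64 in /opt/socket_vmnet/bin/socket_vmnet_client, which must first dial /var/run/socket_vmnet to obtain the network file descriptor (the fd=3 in the -netdev argument). The failing step can likely be reproduced without minikube, assuming socket_vmnet_client's usual "<socket path> <command...>" calling convention:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true   # expected to print the same "Connection refused" while the daemon is down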

TestMultiNode/serial/DeployApp2Nodes (110.12s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (58.993459ms)

** stderr ** 
	error: cluster "multinode-127000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- rollout status deployment/busybox: exit status 1 (55.853875ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.769459ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.341833ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.211208ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.530708ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.192083ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.327333ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.192334ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.492209ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.637084ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.672416ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.404416ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.796ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.754792ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.682ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.2205ms)

** stderr ** 
	error: no server found for cluster "multinode-127000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (30.684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (110.12s)
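Note: every kubectl invocation in this test fails with 'no server found for cluster "multinode-127000"' because the qemu2 VM never came up, so the kubeconfig this run exports (KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig) has no usable cluster entry for the profile. A minimal Go sketch of that check using client-go's clientcmd; this is illustrative, not the test's own code, and the path is the one from this run's environment:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path taken from this run's environment; adjust for a local repro.
        cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19377-7130/kubeconfig")
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        // If the profile's cluster entry is missing, kubectl has no server
        // to talk to, which matches the stderr above.
        if _, ok := cfg.Clusters["multinode-127000"]; !ok {
            fmt.Println(`no server found for cluster "multinode-127000"`)
        }
    }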

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.194042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-127000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (30.444166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-127000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-127000 -v 3 --alsologtostderr: exit status 83 (41.000125ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-127000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-127000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:31:54.175855    9150 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:31:54.176016    9150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:54.176019    9150 out.go:304] Setting ErrFile to fd 2...
	I0805 04:31:54.176021    9150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:54.176157    9150 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:31:54.176393    9150 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:31:54.176581    9150 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:31:54.180873    9150 out.go:177] * The control-plane node multinode-127000 host is not running: state=Stopped
	I0805 04:31:54.184815    9150 out.go:177]   To start a cluster, run: "minikube start -p multinode-127000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-127000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (30.385625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-127000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-127000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.61675ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-127000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-127000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-127000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (30.222125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
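Note: the "unexpected end of JSON input" at multinode_test.go:230 is what encoding/json returns when handed empty input: kubectl printed only the stderr above and nothing on stdout, so the test decoded zero bytes. A minimal Go sketch; the target type here is assumed, not the test's exact one:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // kubectl produced no stdout, so the decoder sees an empty slice.
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }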

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-127000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-127000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-127000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-127000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (29.83625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
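Note: in the profile JSON above, Config.Nodes carries exactly one entry (the control-plane node); the worker nodes were never added, so the expected count of 3 cannot be met. A minimal Go decode sketch, with the struct trimmed to just the fields the assertion needs (field names taken from the JSON above):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []json.RawMessage // only the count matters here
            }
        } `json:"valid"`
    }

    func main() {
        // Abbreviated form of the 'profile list --output json' result quoted above.
        out := []byte(`{"invalid":[],"valid":[{"Name":"multinode-127000",` +
            `"Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println(err)
            return
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: %d node(s), want 3\n", p.Name, len(p.Config.Nodes))
        }
    }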

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status --output json --alsologtostderr: exit status 7 (29.767208ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-127000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:31:54.381755    9162 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:31:54.381900    9162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:54.381903    9162 out.go:304] Setting ErrFile to fd 2...
	I0805 04:31:54.381906    9162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:54.382026    9162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:31:54.382165    9162 out.go:298] Setting JSON to true
	I0805 04:31:54.382174    9162 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:31:54.382237    9162 notify.go:220] Checking for updates...
	I0805 04:31:54.382388    9162 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:31:54.382395    9162 status.go:255] checking status of multinode-127000 ...
	I0805 04:31:54.382604    9162 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:31:54.382608    9162 status.go:343] host is not running, skipping remaining checks
	I0805 04:31:54.382610    9162 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-127000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (30.059ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
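Note: "json: cannot unmarshal object into Go value of type []cmd.Status" is a shape mismatch: with only one node in the profile, "status --output json" printed the single JSON object shown in stdout above, while the multi-node test decodes into a slice. A minimal Go sketch with a trimmed stand-in for minikube's cmd.Status:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Stand-in for minikube's cmd.Status, trimmed to two fields.
    type Status struct {
        Name string
        Host string
    }

    func main() {
        // A single object, as in the stdout above; the test expects an array.
        out := []byte(`{"Name":"multinode-127000","Host":"Stopped"}`)
        var statuses []Status
        err := json.Unmarshal(out, &statuses)
        fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
    }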

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 node stop m03: exit status 85 (46.202875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-127000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status: exit status 7 (29.295125ms)

                                                
                                                
-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status --alsologtostderr: exit status 7 (30.007708ms)

                                                
                                                
-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:31:54.518280    9170 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:31:54.518443    9170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:54.518446    9170 out.go:304] Setting ErrFile to fd 2...
	I0805 04:31:54.518449    9170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:54.518587    9170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:31:54.518703    9170 out.go:298] Setting JSON to false
	I0805 04:31:54.518712    9170 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:31:54.518775    9170 notify.go:220] Checking for updates...
	I0805 04:31:54.518895    9170 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:31:54.518904    9170 status.go:255] checking status of multinode-127000 ...
	I0805 04:31:54.519115    9170 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:31:54.519118    9170 status.go:343] host is not running, skipping remaining checks
	I0805 04:31:54.519120    9170 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-127000 status --alsologtostderr": multinode-127000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (30.155208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (52.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.790625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:31:54.578326    9174 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:31:54.578704    9174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:54.578713    9174 out.go:304] Setting ErrFile to fd 2...
	I0805 04:31:54.578716    9174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:54.578836    9174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:31:54.579081    9174 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:31:54.579267    9174 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:31:54.583417    9174 out.go:177] 
	W0805 04:31:54.586433    9174 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0805 04:31:54.586437    9174 out.go:239] * 
	* 
	W0805 04:31:54.588307    9174 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:31:54.591377    9174 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0805 04:31:54.578326    9174 out.go:291] Setting OutFile to fd 1 ...
I0805 04:31:54.578704    9174 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:31:54.578713    9174 out.go:304] Setting ErrFile to fd 2...
I0805 04:31:54.578716    9174 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 04:31:54.578836    9174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
I0805 04:31:54.579081    9174 mustload.go:65] Loading cluster: multinode-127000
I0805 04:31:54.579267    9174 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 04:31:54.583417    9174 out.go:177] 
W0805 04:31:54.586433    9174 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0805 04:31:54.586437    9174 out.go:239] * 
* 
W0805 04:31:54.588307    9174 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0805 04:31:54.591377    9174 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-127000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr: exit status 7 (30.818167ms)

                                                
                                                
-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:31:54.625541    9176 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:31:54.625705    9176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:54.625708    9176 out.go:304] Setting ErrFile to fd 2...
	I0805 04:31:54.625711    9176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:54.625852    9176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:31:54.625962    9176 out.go:298] Setting JSON to false
	I0805 04:31:54.625971    9176 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:31:54.626031    9176 notify.go:220] Checking for updates...
	I0805 04:31:54.626169    9176 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:31:54.626176    9176 status.go:255] checking status of multinode-127000 ...
	I0805 04:31:54.626416    9176 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:31:54.626419    9176 status.go:343] host is not running, skipping remaining checks
	I0805 04:31:54.626421    9176 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr: exit status 7 (72.674375ms)

                                                
                                                
-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:31:55.470838    9178 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:31:55.471035    9178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:55.471039    9178 out.go:304] Setting ErrFile to fd 2...
	I0805 04:31:55.471042    9178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:55.471229    9178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:31:55.471384    9178 out.go:298] Setting JSON to false
	I0805 04:31:55.471402    9178 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:31:55.471446    9178 notify.go:220] Checking for updates...
	I0805 04:31:55.471659    9178 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:31:55.471668    9178 status.go:255] checking status of multinode-127000 ...
	I0805 04:31:55.471954    9178 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:31:55.471959    9178 status.go:343] host is not running, skipping remaining checks
	I0805 04:31:55.471962    9178 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr: exit status 7 (73.482875ms)

                                                
                                                
-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:31:56.854476    9182 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:31:56.854677    9182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:56.854681    9182 out.go:304] Setting ErrFile to fd 2...
	I0805 04:31:56.854684    9182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:56.854871    9182 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:31:56.855021    9182 out.go:298] Setting JSON to false
	I0805 04:31:56.855033    9182 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:31:56.855067    9182 notify.go:220] Checking for updates...
	I0805 04:31:56.855271    9182 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:31:56.855279    9182 status.go:255] checking status of multinode-127000 ...
	I0805 04:31:56.855541    9182 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:31:56.855546    9182 status.go:343] host is not running, skipping remaining checks
	I0805 04:31:56.855549    9182 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr: exit status 7 (75.721583ms)

                                                
                                                
-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:31:59.204426    9184 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:31:59.204629    9184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:59.204634    9184 out.go:304] Setting ErrFile to fd 2...
	I0805 04:31:59.204637    9184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:31:59.204847    9184 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:31:59.205008    9184 out.go:298] Setting JSON to false
	I0805 04:31:59.205025    9184 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:31:59.205073    9184 notify.go:220] Checking for updates...
	I0805 04:31:59.205301    9184 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:31:59.205311    9184 status.go:255] checking status of multinode-127000 ...
	I0805 04:31:59.205616    9184 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:31:59.205621    9184 status.go:343] host is not running, skipping remaining checks
	I0805 04:31:59.205624    9184 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr: exit status 7 (73.655458ms)

                                                
                                                
-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:32:01.258918    9186 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:32:01.259095    9186 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:01.259099    9186 out.go:304] Setting ErrFile to fd 2...
	I0805 04:32:01.259102    9186 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:01.259270    9186 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:32:01.259421    9186 out.go:298] Setting JSON to false
	I0805 04:32:01.259432    9186 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:32:01.259485    9186 notify.go:220] Checking for updates...
	I0805 04:32:01.259697    9186 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:32:01.259706    9186 status.go:255] checking status of multinode-127000 ...
	I0805 04:32:01.259981    9186 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:32:01.259985    9186 status.go:343] host is not running, skipping remaining checks
	I0805 04:32:01.259989    9186 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr: exit status 7 (72.127875ms)

                                                
                                                
-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:32:05.990303    9188 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:32:05.990499    9188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:05.990504    9188 out.go:304] Setting ErrFile to fd 2...
	I0805 04:32:05.990508    9188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:05.990678    9188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:32:05.990832    9188 out.go:298] Setting JSON to false
	I0805 04:32:05.990849    9188 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:32:05.990900    9188 notify.go:220] Checking for updates...
	I0805 04:32:05.991118    9188 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:32:05.991126    9188 status.go:255] checking status of multinode-127000 ...
	I0805 04:32:05.991400    9188 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:32:05.991405    9188 status.go:343] host is not running, skipping remaining checks
	I0805 04:32:05.991408    9188 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr: exit status 7 (74.038292ms)

                                                
                                                
-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:32:17.211969    9192 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:32:17.212156    9192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:17.212160    9192 out.go:304] Setting ErrFile to fd 2...
	I0805 04:32:17.212163    9192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:17.212343    9192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:32:17.212502    9192 out.go:298] Setting JSON to false
	I0805 04:32:17.212514    9192 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:32:17.212557    9192 notify.go:220] Checking for updates...
	I0805 04:32:17.212754    9192 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:32:17.212763    9192 status.go:255] checking status of multinode-127000 ...
	I0805 04:32:17.213050    9192 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:32:17.213055    9192 status.go:343] host is not running, skipping remaining checks
	I0805 04:32:17.213058    9192 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr: exit status 7 (74.122292ms)

                                                
                                                
-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:32:30.457689    9202 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:32:30.457884    9202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:30.457888    9202 out.go:304] Setting ErrFile to fd 2...
	I0805 04:32:30.457891    9202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:30.458062    9202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:32:30.458233    9202 out.go:298] Setting JSON to false
	I0805 04:32:30.458245    9202 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:32:30.458282    9202 notify.go:220] Checking for updates...
	I0805 04:32:30.458500    9202 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:32:30.458510    9202 status.go:255] checking status of multinode-127000 ...
	I0805 04:32:30.458786    9202 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:32:30.458791    9202 status.go:343] host is not running, skipping remaining checks
	I0805 04:32:30.458794    9202 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr: exit status 7 (73.488333ms)

                                                
                                                
-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:32:46.673992    9215 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:32:46.674174    9215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:46.674178    9215 out.go:304] Setting ErrFile to fd 2...
	I0805 04:32:46.674180    9215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:46.674336    9215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:32:46.674517    9215 out.go:298] Setting JSON to false
	I0805 04:32:46.674528    9215 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:32:46.674556    9215 notify.go:220] Checking for updates...
	I0805 04:32:46.674804    9215 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:32:46.674813    9215 status.go:255] checking status of multinode-127000 ...
	I0805 04:32:46.675094    9215 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:32:46.675098    9215 status.go:343] host is not running, skipping remaining checks
	I0805 04:32:46.675101    9215 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-127000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (33.01175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (52.09s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-127000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-127000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-127000: (2.086759292s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-127000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-127000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.22182775s)

                                                
                                                
-- stdout --
	* [multinode-127000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-127000" primary control-plane node in "multinode-127000" cluster
	* Restarting existing qemu2 VM for "multinode-127000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-127000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
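Note: the restart dies before the VM boots because QEMU is launched through socket_vmnet_client (see the libmachine command line in the stderr below), which needs the socket_vmnet daemon listening on /var/run/socket_vmnet. A minimal Go probe of that unix socket reproduces the same refusal when the daemon is down; a diagnostic sketch, not minikube code:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Same socket path the qemu2 driver uses on this host.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            // With the socket present but no daemon listening:
            // "connect: connection refused", as in the output above.
            fmt.Println(err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }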
** stderr ** 
	I0805 04:32:48.888174    9233 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:32:48.888394    9233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:48.888399    9233 out.go:304] Setting ErrFile to fd 2...
	I0805 04:32:48.888402    9233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:48.888582    9233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:32:48.889845    9233 out.go:298] Setting JSON to false
	I0805 04:32:48.908934    9233 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5538,"bootTime":1722852030,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:32:48.909025    9233 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:32:48.913873    9233 out.go:177] * [multinode-127000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:32:48.920829    9233 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:32:48.920871    9233 notify.go:220] Checking for updates...
	I0805 04:32:48.927761    9233 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:32:48.930784    9233 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:32:48.933831    9233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:32:48.936764    9233 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:32:48.939804    9233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:32:48.942987    9233 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:32:48.943040    9233 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:32:48.947793    9233 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:32:48.954720    9233 start.go:297] selected driver: qemu2
	I0805 04:32:48.954725    9233 start.go:901] validating driver "qemu2" against &{Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:32:48.954775    9233 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:32:48.957072    9233 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:32:48.957096    9233 cni.go:84] Creating CNI manager for ""
	I0805 04:32:48.957101    9233 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 04:32:48.957146    9233 start.go:340] cluster config:
	{Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:32:48.960793    9233 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:32:48.968648    9233 out.go:177] * Starting "multinode-127000" primary control-plane node in "multinode-127000" cluster
	I0805 04:32:48.972796    9233 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:32:48.972815    9233 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:32:48.972829    9233 cache.go:56] Caching tarball of preloaded images
	I0805 04:32:48.972900    9233 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:32:48.972906    9233 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:32:48.972970    9233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/multinode-127000/config.json ...
	I0805 04:32:48.973446    9233 start.go:360] acquireMachinesLock for multinode-127000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:32:48.973482    9233 start.go:364] duration metric: took 29.666µs to acquireMachinesLock for "multinode-127000"
	I0805 04:32:48.973495    9233 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:32:48.973500    9233 fix.go:54] fixHost starting: 
	I0805 04:32:48.973631    9233 fix.go:112] recreateIfNeeded on multinode-127000: state=Stopped err=<nil>
	W0805 04:32:48.973640    9233 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:32:48.981755    9233 out.go:177] * Restarting existing qemu2 VM for "multinode-127000" ...
	I0805 04:32:48.985795    9233 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:32:48.985839    9233 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:6e:99:7a:f2:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2
	I0805 04:32:48.988015    9233 main.go:141] libmachine: STDOUT: 
	I0805 04:32:48.988036    9233 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:32:48.988067    9233 fix.go:56] duration metric: took 14.567459ms for fixHost
	I0805 04:32:48.988072    9233 start.go:83] releasing machines lock for "multinode-127000", held for 14.585084ms
	W0805 04:32:48.988080    9233 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:32:48.988118    9233 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:32:48.988123    9233 start.go:729] Will try again in 5 seconds ...
	I0805 04:32:53.990409    9233 start.go:360] acquireMachinesLock for multinode-127000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:32:53.990792    9233 start.go:364] duration metric: took 284µs to acquireMachinesLock for "multinode-127000"
	I0805 04:32:53.990931    9233 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:32:53.990951    9233 fix.go:54] fixHost starting: 
	I0805 04:32:53.991622    9233 fix.go:112] recreateIfNeeded on multinode-127000: state=Stopped err=<nil>
	W0805 04:32:53.991648    9233 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:32:53.999978    9233 out.go:177] * Restarting existing qemu2 VM for "multinode-127000" ...
	I0805 04:32:54.004046    9233 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:32:54.004269    9233 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:6e:99:7a:f2:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2
	I0805 04:32:54.013012    9233 main.go:141] libmachine: STDOUT: 
	I0805 04:32:54.013069    9233 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:32:54.013121    9233 fix.go:56] duration metric: took 22.170875ms for fixHost
	I0805 04:32:54.013135    9233 start.go:83] releasing machines lock for "multinode-127000", held for 22.318083ms
	W0805 04:32:54.013268    9233 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-127000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-127000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:32:54.021019    9233 out.go:177] 
	W0805 04:32:54.025093    9233 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:32:54.025141    9233 out.go:239] * 
	* 
	W0805 04:32:54.027798    9233 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:32:54.035012    9233 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-127000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-127000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (33.055291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.44s)
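
Every attempt above dies at the same call: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never handed its network file descriptor. A minimal Go probe of that socket reproduces the check (the socket path is taken from the log; the probe itself is not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client connects to; a
	// "connection refused" means no socket_vmnet daemon is listening.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the same way on the CI host, every qemu2 start in this report is expected to fail before the VM ever boots.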

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 node delete m03: exit status 83 (43.29325ms)

-- stdout --
	* The control-plane node multinode-127000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-127000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-127000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status --alsologtostderr: exit status 7 (30.312458ms)

-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:32:54.222898    9247 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:32:54.223065    9247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:54.223068    9247 out.go:304] Setting ErrFile to fd 2...
	I0805 04:32:54.223070    9247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:54.223202    9247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:32:54.223327    9247 out.go:298] Setting JSON to false
	I0805 04:32:54.223336    9247 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:32:54.223400    9247 notify.go:220] Checking for updates...
	I0805 04:32:54.223530    9247 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:32:54.223540    9247 status.go:255] checking status of multinode-127000 ...
	I0805 04:32:54.223753    9247 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:32:54.223757    9247 status.go:343] host is not running, skipping remaining checks
	I0805 04:32:54.223759    9247 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-127000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (29.564167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
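
The post-mortem helper requests `minikube status --format={{.Host}}`; that format string is rendered as a Go text/template over minikube's status struct, whose fields appear verbatim in the stderr above (&{Name:multinode-127000 Host:Stopped Kubelet:Stopped ...}), which is why the captured stdout is the single word "Stopped". A trimmed-down sketch of the same rendering (the struct below mirrors only the logged fields, not minikube's real type):

package main

import (
	"os"
	"text/template"
)

// Status carries just the fields printed in the log line
// "multinode-127000 status: &{Name:... Host:Stopped Kubelet:Stopped ...}".
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := Status{
		Name: "multinode-127000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped",
	}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the -- stdout -- block
}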

TestMultiNode/serial/StopMultiNode (3.38s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-127000 stop: (3.249336542s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status: exit status 7 (68.789417ms)

-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-127000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-127000 status --alsologtostderr: exit status 7 (33.952667ms)

-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 04:32:57.605295    9271 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:32:57.605443    9271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:57.605446    9271 out.go:304] Setting ErrFile to fd 2...
	I0805 04:32:57.605449    9271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:57.605577    9271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:32:57.605696    9271 out.go:298] Setting JSON to false
	I0805 04:32:57.605705    9271 mustload.go:65] Loading cluster: multinode-127000
	I0805 04:32:57.605754    9271 notify.go:220] Checking for updates...
	I0805 04:32:57.605901    9271 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:32:57.605909    9271 status.go:255] checking status of multinode-127000 ...
	I0805 04:32:57.606110    9271 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0805 04:32:57.606113    9271 status.go:343] host is not running, skipping remaining checks
	I0805 04:32:57.606116    9271 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-127000 status --alsologtostderr": multinode-127000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-127000 status --alsologtostderr": multinode-127000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (29.501667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.38s)
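
Both assertions above ("incorrect number of stopped hosts/kubelets") count node blocks in the status output: after `minikube stop`, a two-node cluster should report "host: Stopped" and "kubelet: Stopped" once per node, but only the control-plane block is present because the worker node was never added. A simplified reconstruction of the check (the real logic lives in multinode_test.go and may differ in detail):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The status output captured in the -- stdout -- block above: one node only.
	out := `multinode-127000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	const wantNodes = 2 // control plane plus one worker in a passing run
	if got := strings.Count(out, "host: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
	if got := strings.Count(out, "kubelet: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
	}
}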

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-127000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-127000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.178029084s)

-- stdout --
	* [multinode-127000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-127000" primary control-plane node in "multinode-127000" cluster
	* Restarting existing qemu2 VM for "multinode-127000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-127000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:32:57.664709    9275 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:32:57.664835    9275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:57.664839    9275 out.go:304] Setting ErrFile to fd 2...
	I0805 04:32:57.664841    9275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:32:57.664966    9275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:32:57.665895    9275 out.go:298] Setting JSON to false
	I0805 04:32:57.682156    9275 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5547,"bootTime":1722852030,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:32:57.682250    9275 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:32:57.685820    9275 out.go:177] * [multinode-127000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:32:57.693748    9275 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:32:57.693811    9275 notify.go:220] Checking for updates...
	I0805 04:32:57.700677    9275 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:32:57.703726    9275 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:32:57.706711    9275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:32:57.709724    9275 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:32:57.712702    9275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:32:57.716041    9275 config.go:182] Loaded profile config "multinode-127000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:32:57.716303    9275 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:32:57.720642    9275 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:32:57.727703    9275 start.go:297] selected driver: qemu2
	I0805 04:32:57.727718    9275 start.go:901] validating driver "qemu2" against &{Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:32:57.727772    9275 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:32:57.730086    9275 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:32:57.730126    9275 cni.go:84] Creating CNI manager for ""
	I0805 04:32:57.730131    9275 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 04:32:57.730183    9275 start.go:340] cluster config:
	{Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:32:57.733751    9275 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:32:57.741756    9275 out.go:177] * Starting "multinode-127000" primary control-plane node in "multinode-127000" cluster
	I0805 04:32:57.744668    9275 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:32:57.744687    9275 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:32:57.744699    9275 cache.go:56] Caching tarball of preloaded images
	I0805 04:32:57.744753    9275 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:32:57.744758    9275 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:32:57.744821    9275 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/multinode-127000/config.json ...
	I0805 04:32:57.745284    9275 start.go:360] acquireMachinesLock for multinode-127000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:32:57.745315    9275 start.go:364] duration metric: took 20.459µs to acquireMachinesLock for "multinode-127000"
	I0805 04:32:57.745323    9275 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:32:57.745329    9275 fix.go:54] fixHost starting: 
	I0805 04:32:57.745444    9275 fix.go:112] recreateIfNeeded on multinode-127000: state=Stopped err=<nil>
	W0805 04:32:57.745452    9275 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:32:57.752682    9275 out.go:177] * Restarting existing qemu2 VM for "multinode-127000" ...
	I0805 04:32:57.756653    9275 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:32:57.756688    9275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:6e:99:7a:f2:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2
	I0805 04:32:57.758593    9275 main.go:141] libmachine: STDOUT: 
	I0805 04:32:57.758613    9275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:32:57.758640    9275 fix.go:56] duration metric: took 13.311375ms for fixHost
	I0805 04:32:57.758645    9275 start.go:83] releasing machines lock for "multinode-127000", held for 13.326167ms
	W0805 04:32:57.758652    9275 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:32:57.758680    9275 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:32:57.758686    9275 start.go:729] Will try again in 5 seconds ...
	I0805 04:33:02.760979    9275 start.go:360] acquireMachinesLock for multinode-127000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:33:02.761363    9275 start.go:364] duration metric: took 283.166µs to acquireMachinesLock for "multinode-127000"
	I0805 04:33:02.761497    9275 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:33:02.761516    9275 fix.go:54] fixHost starting: 
	I0805 04:33:02.762278    9275 fix.go:112] recreateIfNeeded on multinode-127000: state=Stopped err=<nil>
	W0805 04:33:02.762306    9275 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:33:02.766724    9275 out.go:177] * Restarting existing qemu2 VM for "multinode-127000" ...
	I0805 04:33:02.770647    9275 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:33:02.770903    9275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:6e:99:7a:f2:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/multinode-127000/disk.qcow2
	I0805 04:33:02.779890    9275 main.go:141] libmachine: STDOUT: 
	I0805 04:33:02.779947    9275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:33:02.780011    9275 fix.go:56] duration metric: took 18.495916ms for fixHost
	I0805 04:33:02.780034    9275 start.go:83] releasing machines lock for "multinode-127000", held for 18.648625ms
	W0805 04:33:02.780215    9275 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-127000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-127000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:33:02.787656    9275 out.go:177] 
	W0805 04:33:02.791692    9275 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:33:02.791732    9275 out.go:239] * 
	* 
	W0805 04:33:02.794171    9275 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:33:02.802657    9275 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-127000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (67.108625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
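
The start path visible above makes exactly two attempts: fixHost fails, minikube prints "StartHost failed, but will try again", sleeps five seconds, retries once, and then exits with status 80 alongside the GUEST_PROVISION reason. Reduced to a sketch, the flow looks like this (illustrative only, not minikube's actual control flow in start.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails throughout this report.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err = startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			// the real binary exits non-zero here; the log shows exit status 80
		}
	}
}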

TestMultiNode/serial/ValidateNameConflict (20.53s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-127000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-127000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-127000-m01 --driver=qemu2 : exit status 80 (10.042412791s)

-- stdout --
	* [multinode-127000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-127000-m01" primary control-plane node in "multinode-127000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-127000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-127000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-127000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-127000-m02 --driver=qemu2 : exit status 80 (10.264639541s)

-- stdout --
	* [multinode-127000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-127000-m02" primary control-plane node in "multinode-127000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-127000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-127000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-127000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-127000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-127000: exit status 83 (82.730875ms)

-- stdout --
	* The control-plane node multinode-127000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-127000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-127000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-127000 -n multinode-127000: exit status 7 (30.104875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-127000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.53s)
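
The profile names exercised here are chosen to collide with minikube's per-node naming scheme: secondary nodes of a profile take the suffixes -m02, -m03, and so on (the earlier `node delete m03` call uses the same convention), so a profile named multinode-127000-m02 is ambiguous with a node of the multinode-127000 cluster. A sketch of that convention as it appears in these logs (the zero-padded -mNN suffix is inferred from the names above, not quoted from minikube's source):

package main

import "fmt"

// nodeName reflects the convention visible in this report: the primary node
// shares the profile name; additional nodes get -m02, -m03, ...
func nodeName(profile string, idx int) string {
	if idx == 1 {
		return profile
	}
	return fmt.Sprintf("%s-m%02d", profile, idx)
}

func main() {
	fmt.Println(nodeName("multinode-127000", 2)) // multinode-127000-m02
	fmt.Println(nodeName("multinode-127000", 3)) // multinode-127000-m03
	// A profile literally named "multinode-127000-m02" would collide with the
	// first worker's node name, which is the conflict this test exercises.
}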

TestPreload (10.03s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-907000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-907000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.886499417s)

-- stdout --
	* [test-preload-907000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-907000" primary control-plane node in "test-preload-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:33:23.551650    9332 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:33:23.551789    9332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:33:23.551792    9332 out.go:304] Setting ErrFile to fd 2...
	I0805 04:33:23.551795    9332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:33:23.551926    9332 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:33:23.552996    9332 out.go:298] Setting JSON to false
	I0805 04:33:23.568921    9332 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5573,"bootTime":1722852030,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:33:23.568987    9332 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:33:23.575037    9332 out.go:177] * [test-preload-907000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:33:23.579550    9332 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:33:23.579612    9332 notify.go:220] Checking for updates...
	I0805 04:33:23.586997    9332 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:33:23.588501    9332 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:33:23.591965    9332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:33:23.594987    9332 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:33:23.598005    9332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:33:23.601384    9332 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:33:23.601433    9332 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:33:23.605959    9332 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:33:23.612981    9332 start.go:297] selected driver: qemu2
	I0805 04:33:23.612988    9332 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:33:23.612994    9332 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:33:23.615229    9332 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:33:23.617992    9332 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:33:23.621125    9332 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:33:23.621150    9332 cni.go:84] Creating CNI manager for ""
	I0805 04:33:23.621165    9332 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:33:23.621170    9332 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:33:23.621194    9332 start.go:340] cluster config:
	{Name:test-preload-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:33:23.624928    9332 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:33:23.632986    9332 out.go:177] * Starting "test-preload-907000" primary control-plane node in "test-preload-907000" cluster
	I0805 04:33:23.635936    9332 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0805 04:33:23.635998    9332 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/test-preload-907000/config.json ...
	I0805 04:33:23.636011    9332 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/test-preload-907000/config.json: {Name:mkca954705af1ab7250972138e81c662deb78191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:33:23.636041    9332 cache.go:107] acquiring lock: {Name:mkdd5f63d152699c168c9a3f5a57a1feefea632e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:33:23.636056    9332 cache.go:107] acquiring lock: {Name:mk0a7819add7465fad2fd0a86cd140be57dd6847 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:33:23.636108    9332 cache.go:107] acquiring lock: {Name:mk2e2b83fd0465d685cd5197466e0d450d2c9643 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:33:23.636058    9332 cache.go:107] acquiring lock: {Name:mka8c6f5429536f3c5cedff10be7e23aaad1c91d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:33:23.636264    9332 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0805 04:33:23.636263    9332 cache.go:107] acquiring lock: {Name:mk02e047f80bf71aeadd809bd5e117dd57e72286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:33:23.636293    9332 cache.go:107] acquiring lock: {Name:mkf9d89bc3fd47229b6804a2c511ff37883350a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:33:23.636272    9332 cache.go:107] acquiring lock: {Name:mke72f5a7ca24fd6e7e4136390d37c100ba6d26a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:33:23.636041    9332 cache.go:107] acquiring lock: {Name:mkeb818093555f91bcb12fbec7cbbc603d3fe01e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:33:23.636398    9332 start.go:360] acquireMachinesLock for test-preload-907000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:33:23.636403    9332 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 04:33:23.636434    9332 start.go:364] duration metric: took 28.166µs to acquireMachinesLock for "test-preload-907000"
	I0805 04:33:23.636453    9332 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 04:33:23.636444    9332 start.go:93] Provisioning new machine with config: &{Name:test-preload-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:33:23.636506    9332 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:33:23.636515    9332 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:33:23.636534    9332 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0805 04:33:23.636539    9332 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:33:23.636543    9332 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0805 04:33:23.636948    9332 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0805 04:33:23.644934    9332 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:33:23.648445    9332 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0805 04:33:23.648512    9332 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0805 04:33:23.649200    9332 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:33:23.649944    9332 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0805 04:33:23.650001    9332 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0805 04:33:23.650019    9332 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 04:33:23.650028    9332 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0805 04:33:23.650062    9332 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:33:23.662688    9332 start.go:159] libmachine.API.Create for "test-preload-907000" (driver="qemu2")
	I0805 04:33:23.662711    9332 client.go:168] LocalClient.Create starting
	I0805 04:33:23.662826    9332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:33:23.662858    9332 main.go:141] libmachine: Decoding PEM data...
	I0805 04:33:23.662867    9332 main.go:141] libmachine: Parsing certificate...
	I0805 04:33:23.662906    9332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:33:23.662928    9332 main.go:141] libmachine: Decoding PEM data...
	I0805 04:33:23.662935    9332 main.go:141] libmachine: Parsing certificate...
	I0805 04:33:23.663297    9332 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:33:23.819820    9332 main.go:141] libmachine: Creating SSH key...
	I0805 04:33:23.895892    9332 main.go:141] libmachine: Creating Disk image...
	I0805 04:33:23.895911    9332 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:33:23.896107    9332 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/disk.qcow2
	I0805 04:33:23.906746    9332 main.go:141] libmachine: STDOUT: 
	I0805 04:33:23.906797    9332 main.go:141] libmachine: STDERR: 
	I0805 04:33:23.906887    9332 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/disk.qcow2 +20000M
	I0805 04:33:23.915483    9332 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:33:23.915501    9332 main.go:141] libmachine: STDERR: 
	I0805 04:33:23.915514    9332 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/disk.qcow2
	I0805 04:33:23.915517    9332 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:33:23.915531    9332 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:33:23.915559    9332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:32:2b:d6:a8:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/disk.qcow2
	I0805 04:33:23.917695    9332 main.go:141] libmachine: STDOUT: 
	I0805 04:33:23.917713    9332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:33:23.917731    9332 client.go:171] duration metric: took 255.012625ms to LocalClient.Create
	I0805 04:33:24.018914    9332 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 04:33:24.037211    9332 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0805 04:33:24.064597    9332 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 04:33:24.064621    9332 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 04:33:24.064976    9332 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0805 04:33:24.088072    9332 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0805 04:33:24.111217    9332 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 04:33:24.141105    9332 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0805 04:33:24.168557    9332 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0805 04:33:24.168583    9332 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 532.517833ms
	I0805 04:33:24.168604    9332 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0805 04:33:24.672082    9332 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 04:33:24.672176    9332 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 04:33:24.923817    9332 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0805 04:33:24.923879    9332 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.287803417s
	I0805 04:33:24.923904    9332 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0805 04:33:25.918067    9332 start.go:128] duration metric: took 2.2815125s to createHost
	I0805 04:33:25.918128    9332 start.go:83] releasing machines lock for "test-preload-907000", held for 2.2816635s
	W0805 04:33:25.918192    9332 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:33:25.934581    9332 out.go:177] * Deleting "test-preload-907000" in qemu2 ...
	W0805 04:33:25.963416    9332 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:33:25.963443    9332 start.go:729] Will try again in 5 seconds ...
	I0805 04:33:26.687498    9332 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0805 04:33:26.687565    9332 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.051312583s
	I0805 04:33:26.687595    9332 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0805 04:33:27.189717    9332 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0805 04:33:27.189778    9332 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.553496583s
	I0805 04:33:27.189808    9332 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0805 04:33:28.330831    9332 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0805 04:33:28.330878    9332 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.694795584s
	I0805 04:33:28.330897    9332 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0805 04:33:28.809784    9332 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0805 04:33:28.809834    9332 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.173748417s
	I0805 04:33:28.809878    9332 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0805 04:33:29.170616    9332 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0805 04:33:29.170658    9332 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.5343855s
	I0805 04:33:29.170685    9332 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0805 04:33:30.963683    9332 start.go:360] acquireMachinesLock for test-preload-907000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:33:30.964117    9332 start.go:364] duration metric: took 354.5µs to acquireMachinesLock for "test-preload-907000"
	I0805 04:33:30.964236    9332 start.go:93] Provisioning new machine with config: &{Name:test-preload-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:33:30.964463    9332 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:33:30.971134    9332 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:33:31.020906    9332 start.go:159] libmachine.API.Create for "test-preload-907000" (driver="qemu2")
	I0805 04:33:31.020947    9332 client.go:168] LocalClient.Create starting
	I0805 04:33:31.021086    9332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:33:31.021147    9332 main.go:141] libmachine: Decoding PEM data...
	I0805 04:33:31.021170    9332 main.go:141] libmachine: Parsing certificate...
	I0805 04:33:31.021235    9332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:33:31.021284    9332 main.go:141] libmachine: Decoding PEM data...
	I0805 04:33:31.021299    9332 main.go:141] libmachine: Parsing certificate...
	I0805 04:33:31.021808    9332 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:33:31.179434    9332 main.go:141] libmachine: Creating SSH key...
	I0805 04:33:31.344709    9332 main.go:141] libmachine: Creating Disk image...
	I0805 04:33:31.344721    9332 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:33:31.344927    9332 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/disk.qcow2
	I0805 04:33:31.354627    9332 main.go:141] libmachine: STDOUT: 
	I0805 04:33:31.354644    9332 main.go:141] libmachine: STDERR: 
	I0805 04:33:31.354684    9332 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/disk.qcow2 +20000M
	I0805 04:33:31.362737    9332 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:33:31.362752    9332 main.go:141] libmachine: STDERR: 
	I0805 04:33:31.362765    9332 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/disk.qcow2
	I0805 04:33:31.362768    9332 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:33:31.362777    9332 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:33:31.362818    9332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:ad:be:45:2a:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/test-preload-907000/disk.qcow2
	I0805 04:33:31.364588    9332 main.go:141] libmachine: STDOUT: 
	I0805 04:33:31.364605    9332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:33:31.364631    9332 client.go:171] duration metric: took 343.676083ms to LocalClient.Create
	I0805 04:33:33.192655    9332 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0805 04:33:33.192721    9332 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.556533417s
	I0805 04:33:33.192788    9332 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0805 04:33:33.192853    9332 cache.go:87] Successfully saved all images to host disk.
	I0805 04:33:33.366820    9332 start.go:128] duration metric: took 2.40228675s to createHost
	I0805 04:33:33.366879    9332 start.go:83] releasing machines lock for "test-preload-907000", held for 2.402710792s
	W0805 04:33:33.367102    9332 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:33:33.377727    9332 out.go:177] 
	W0805 04:33:33.383731    9332 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:33:33.383758    9332 out.go:239] * 
	* 
	W0805 04:33:33.386594    9332 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:33:33.396604    9332 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-907000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-05 04:33:33.414485 -0700 PDT m=+685.990505126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-907000 -n test-preload-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-907000 -n test-preload-907000: exit status 7 (66.91875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-907000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-907000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-907000
--- FAIL: TestPreload (10.03s)
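The root cause for every failure in this group is visible in the captured STDERR: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet ("Connection refused"), so each qemu2 VM create aborts before Kubernetes ever starts. As a minimal sketch of what is being checked — a standalone Go probe written for this report, not part of the minikube test suite, with a hypothetical file name — one can dial the same socket directly:

	// probe_socket_vmnet.go (hypothetical): dial the unix socket that
	// socket_vmnet_client needs. If no daemon is listening, Dial fails
	// with "connection refused", matching the STDERR in the logs above.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1) // same symptom as the "exit status 1" in the log
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

On this runner the probe would exit non-zero, which points at the host's socket_vmnet daemon rather than at minikube or the tests themselves.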

TestScheduledStopUnix (10s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-517000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-517000 --memory=2048 --driver=qemu2 : exit status 80 (9.848752959s)

-- stdout --
	* [scheduled-stop-517000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-517000" primary control-plane node in "scheduled-stop-517000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-517000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-517000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-517000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-517000" primary control-plane node in "scheduled-stop-517000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-517000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-517000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-05 04:33:43.405134 -0700 PDT m=+695.981057126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-517000 -n scheduled-stop-517000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-517000 -n scheduled-stop-517000: exit status 7 (67.88ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-517000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-517000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-517000
--- FAIL: TestScheduledStopUnix (10.00s)

TestSkaffold (12.75s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1459087121 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1459087121 version: (1.055745333s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-168000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-168000 --memory=2600 --driver=qemu2 : exit status 80 (9.783784083s)

-- stdout --
	* [skaffold-168000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-168000" primary control-plane node in "skaffold-168000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-168000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-168000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-168000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-168000" primary control-plane node in "skaffold-168000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-168000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-168000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-05 04:33:56.166825 -0700 PDT m=+708.742623709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-168000 -n skaffold-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-168000 -n skaffold-168000: exit status 7 (63.090792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-168000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-168000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-168000
--- FAIL: TestSkaffold (12.75s)
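TestScheduledStopUnix and TestSkaffold never reach their actual subject matter (scheduled stop, skaffold deploys); both abort in the same socket_vmnet connect step shown in the TestPreload probe sketch above. A hedged workaround for local reruns — not attempted in this run, and assuming the installed minikube build supports selecting the qemu2 driver's builtin user-mode network — would be:

	out/minikube-darwin-arm64 start -p skaffold-168000 --memory=2600 --driver=qemu2 --network=builtin

The trade-off is that the builtin network gives the guest no host-reachable IP, so anything that needs tunnels or direct node access would still be expected to fail.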

TestRunningBinaryUpgrade (595.21s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.333325192 start -p running-upgrade-763000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.333325192 start -p running-upgrade-763000 --memory=2200 --vm-driver=qemu2 : (49.503697459s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-763000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-763000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m32.470546583s)

-- stdout --
	* [running-upgrade-763000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-763000" primary control-plane node in "running-upgrade-763000" cluster
	* Updating the running qemu2 "running-upgrade-763000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0805 04:35:27.406019    9720 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:35:27.406153    9720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:35:27.406157    9720 out.go:304] Setting ErrFile to fd 2...
	I0805 04:35:27.406160    9720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:35:27.406315    9720 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:35:27.407478    9720 out.go:298] Setting JSON to false
	I0805 04:35:27.424073    9720 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5697,"bootTime":1722852030,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:35:27.424139    9720 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:35:27.429037    9720 out.go:177] * [running-upgrade-763000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:35:27.435844    9720 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:35:27.435882    9720 notify.go:220] Checking for updates...
	I0805 04:35:27.442983    9720 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:35:27.445918    9720 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:35:27.448955    9720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:35:27.451998    9720 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:35:27.453347    9720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:35:27.456186    9720 config.go:182] Loaded profile config "running-upgrade-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:35:27.459966    9720 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 04:35:27.462948    9720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:35:27.466976    9720 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:35:27.473980    9720 start.go:297] selected driver: qemu2
	I0805 04:35:27.473984    9720 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51233 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 04:35:27.474029    9720 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:35:27.476309    9720 cni.go:84] Creating CNI manager for ""
	I0805 04:35:27.476327    9720 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:35:27.476354    9720 start.go:340] cluster config:
	{Name:running-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51233 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 04:35:27.476396    9720 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:35:27.484011    9720 out.go:177] * Starting "running-upgrade-763000" primary control-plane node in "running-upgrade-763000" cluster
	I0805 04:35:27.487941    9720 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 04:35:27.487958    9720 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0805 04:35:27.487967    9720 cache.go:56] Caching tarball of preloaded images
	I0805 04:35:27.488026    9720 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:35:27.488031    9720 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0805 04:35:27.488091    9720 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/config.json ...
	I0805 04:35:27.488525    9720 start.go:360] acquireMachinesLock for running-upgrade-763000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:35:27.488556    9720 start.go:364] duration metric: took 25.458µs to acquireMachinesLock for "running-upgrade-763000"
	I0805 04:35:27.488563    9720 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:35:27.488568    9720 fix.go:54] fixHost starting: 
	I0805 04:35:27.489184    9720 fix.go:112] recreateIfNeeded on running-upgrade-763000: state=Running err=<nil>
	W0805 04:35:27.489193    9720 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:35:27.495898    9720 out.go:177] * Updating the running qemu2 "running-upgrade-763000" VM ...
	I0805 04:35:27.499941    9720 machine.go:94] provisionDockerMachine start ...
	I0805 04:35:27.499979    9720 main.go:141] libmachine: Using SSH client type: native
	I0805 04:35:27.500098    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 51201 <nil> <nil>}
	I0805 04:35:27.500103    9720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 04:35:27.552190    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-763000
	
	I0805 04:35:27.552205    9720 buildroot.go:166] provisioning hostname "running-upgrade-763000"
	I0805 04:35:27.552272    9720 main.go:141] libmachine: Using SSH client type: native
	I0805 04:35:27.552399    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 51201 <nil> <nil>}
	I0805 04:35:27.552404    9720 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-763000 && echo "running-upgrade-763000" | sudo tee /etc/hostname
	I0805 04:35:27.607417    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-763000
	
	I0805 04:35:27.607475    9720 main.go:141] libmachine: Using SSH client type: native
	I0805 04:35:27.607634    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 51201 <nil> <nil>}
	I0805 04:35:27.607645    9720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-763000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-763000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-763000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 04:35:27.661208    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 04:35:27.661219    9720 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19377-7130/.minikube CaCertPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19377-7130/.minikube}
	I0805 04:35:27.661226    9720 buildroot.go:174] setting up certificates
	I0805 04:35:27.661230    9720 provision.go:84] configureAuth start
	I0805 04:35:27.661237    9720 provision.go:143] copyHostCerts
	I0805 04:35:27.661305    9720 exec_runner.go:144] found /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.pem, removing ...
	I0805 04:35:27.661310    9720 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.pem
	I0805 04:35:27.661428    9720 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.pem (1078 bytes)
	I0805 04:35:27.661602    9720 exec_runner.go:144] found /Users/jenkins/minikube-integration/19377-7130/.minikube/cert.pem, removing ...
	I0805 04:35:27.661606    9720 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19377-7130/.minikube/cert.pem
	I0805 04:35:27.661649    9720 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19377-7130/.minikube/cert.pem (1123 bytes)
	I0805 04:35:27.661750    9720 exec_runner.go:144] found /Users/jenkins/minikube-integration/19377-7130/.minikube/key.pem, removing ...
	I0805 04:35:27.661753    9720 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19377-7130/.minikube/key.pem
	I0805 04:35:27.661791    9720 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19377-7130/.minikube/key.pem (1675 bytes)
	I0805 04:35:27.661888    9720 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-763000 san=[127.0.0.1 localhost minikube running-upgrade-763000]
	I0805 04:35:27.798636    9720 provision.go:177] copyRemoteCerts
	I0805 04:35:27.798683    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 04:35:27.798692    9720 sshutil.go:53] new ssh client: &{IP:localhost Port:51201 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I0805 04:35:27.826965    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 04:35:27.834176    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0805 04:35:27.840911    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0805 04:35:27.847490    9720 provision.go:87] duration metric: took 186.25325ms to configureAuth
	I0805 04:35:27.847498    9720 buildroot.go:189] setting minikube options for container-runtime
	I0805 04:35:27.847594    9720 config.go:182] Loaded profile config "running-upgrade-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:35:27.847625    9720 main.go:141] libmachine: Using SSH client type: native
	I0805 04:35:27.847715    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 51201 <nil> <nil>}
	I0805 04:35:27.847720    9720 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 04:35:27.899285    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 04:35:27.899295    9720 buildroot.go:70] root file system type: tmpfs
	I0805 04:35:27.899342    9720 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 04:35:27.899382    9720 main.go:141] libmachine: Using SSH client type: native
	I0805 04:35:27.899498    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 51201 <nil> <nil>}
	I0805 04:35:27.899530    9720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 04:35:27.954655    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 04:35:27.954708    9720 main.go:141] libmachine: Using SSH client type: native
	I0805 04:35:27.954839    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 51201 <nil> <nil>}
	I0805 04:35:27.954847    9720 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 04:35:28.008332    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
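The empty output above is the success path of an update-only-if-changed idiom: diff -u exits 0 when the freshly rendered unit matches the installed one, and only a differing file triggers the mv, daemon-reload, enable and restart chain on the right side of ||. A sketch of composing that guarded command (the helper name is hypothetical):

package provision

import "fmt"

// updateUnitCmd builds the guarded shell command from the log: swap in the
// .new unit and restart the service only when the rendered file differs.
func updateUnitCmd(unit string) string {
	p := "/lib/systemd/system/" + unit
	return fmt.Sprintf(
		"sudo diff -u %s %s.new || { sudo mv %s.new %s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %s && sudo systemctl -f restart %s; }",
		p, p, p, p, unit, unit)
}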
	I0805 04:35:28.008342    9720 machine.go:97] duration metric: took 508.390125ms to provisionDockerMachine
	I0805 04:35:28.008348    9720 start.go:293] postStartSetup for "running-upgrade-763000" (driver="qemu2")
	I0805 04:35:28.008354    9720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 04:35:28.008399    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 04:35:28.008408    9720 sshutil.go:53] new ssh client: &{IP:localhost Port:51201 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I0805 04:35:28.039871    9720 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 04:35:28.041548    9720 info.go:137] Remote host: Buildroot 2021.02.12
	I0805 04:35:28.041557    9720 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19377-7130/.minikube/addons for local assets ...
	I0805 04:35:28.041656    9720 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19377-7130/.minikube/files for local assets ...
	I0805 04:35:28.041754    9720 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/ssl/certs/76242.pem -> 76242.pem in /etc/ssl/certs
	I0805 04:35:28.041850    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 04:35:28.044991    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/ssl/certs/76242.pem --> /etc/ssl/certs/76242.pem (1708 bytes)
	I0805 04:35:28.051487    9720 start.go:296] duration metric: took 43.133959ms for postStartSetup
	I0805 04:35:28.051502    9720 fix.go:56] duration metric: took 562.930417ms for fixHost
	I0805 04:35:28.051537    9720 main.go:141] libmachine: Using SSH client type: native
	I0805 04:35:28.051694    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c4ea10] 0x104c51270 <nil>  [] 0s} localhost 51201 <nil> <nil>}
	I0805 04:35:28.051699    9720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 04:35:28.102628    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722857727.817235930
	
	I0805 04:35:28.102639    9720 fix.go:216] guest clock: 1722857727.817235930
	I0805 04:35:28.102643    9720 fix.go:229] Guest: 2024-08-05 04:35:27.81723593 -0700 PDT Remote: 2024-08-05 04:35:28.051504 -0700 PDT m=+0.664924626 (delta=-234.26807ms)
	I0805 04:35:28.102654    9720 fix.go:200] guest clock delta is within tolerance: -234.26807ms
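fix.go samples the guest clock with date +%s.%N over SSH and compares it against the host clock; the -234ms delta here stays inside tolerance, so no resync is performed. A sketch of the parse-and-compare step (the tolerance handling is illustrative):

package provision

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output and returns guest minus host time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 { // %N always yields nine digits of nanoseconds
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, fmt.Errorf("bad nanoseconds: %w", err)
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

// withinTolerance mirrors the "guest clock delta is within tolerance" check.
func withinTolerance(delta, tol time.Duration) bool {
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}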
	I0805 04:35:28.102657    9720 start.go:83] releasing machines lock for "running-upgrade-763000", held for 614.091084ms
	I0805 04:35:28.102726    9720 ssh_runner.go:195] Run: cat /version.json
	I0805 04:35:28.102734    9720 sshutil.go:53] new ssh client: &{IP:localhost Port:51201 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I0805 04:35:28.102726    9720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 04:35:28.102763    9720 sshutil.go:53] new ssh client: &{IP:localhost Port:51201 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	W0805 04:35:28.103283    9720 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51316->127.0.0.1:51201: write: broken pipe
	I0805 04:35:28.103302    9720 retry.go:31] will retry after 264.873062ms: ssh: handshake failed: write tcp 127.0.0.1:51316->127.0.0.1:51201: write: broken pipe
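The broken-pipe handshake failure is treated as transient: retry.go reschedules the command after a short delay instead of failing the start. A minimal sketch of such a jittered-backoff retry loop (the delays are illustrative, not retry.go's actual schedule):

package provision

import (
	"math/rand"
	"time"
)

// retrySSH retries fn with jittered, doubling delays, the pattern used for
// transient dial failures such as "write: broken pipe" in the log above.
func retrySSH(attempts int, fn func() error) error {
	var err error
	delay := 100 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}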
	W0805 04:35:28.400366    9720 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0805 04:35:28.400419    9720 ssh_runner.go:195] Run: systemctl --version
	I0805 04:35:28.402196    9720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 04:35:28.403763    9720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 04:35:28.403791    9720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0805 04:35:28.406961    9720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0805 04:35:28.411134    9720 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 04:35:28.411140    9720 start.go:495] detecting cgroup driver to use...
	I0805 04:35:28.411246    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 04:35:28.416826    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0805 04:35:28.419690    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 04:35:28.422967    9720 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 04:35:28.422985    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 04:35:28.426272    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 04:35:28.429272    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 04:35:28.432157    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 04:35:28.435201    9720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 04:35:28.438388    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 04:35:28.441422    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 04:35:28.444284    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 04:35:28.447322    9720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 04:35:28.450530    9720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 04:35:28.453438    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:35:28.544956    9720 ssh_runner.go:195] Run: sudo systemctl restart containerd
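The sed batch above rewrites /etc/containerd/config.toml in place before containerd restarts: SystemdCgroup = false selects the cgroupfs driver, sandbox_image is pinned to registry.k8s.io/pause:3.7, and legacy runtime names are normalized to io.containerd.runc.v2. A sketch of building one such indentation-preserving rewrite (the helper name is hypothetical):

package provision

import "fmt"

// setTOMLKeyCmd builds a sed command that rewrites `key = ...` lines in
// containerd's config.toml while preserving leading indentation.
func setTOMLKeyCmd(key, value string) string {
	expr := fmt.Sprintf(`s|^( *)%s = .*$|\1%s = %s|g`, key, key, value)
	return fmt.Sprintf("sudo sed -i -r '%s' /etc/containerd/config.toml", expr)
}

For example, setTOMLKeyCmd("SystemdCgroup", "false") reproduces the command logged at 04:35:28.422985, minus the sh -c wrapper.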
	I0805 04:35:28.553440    9720 start.go:495] detecting cgroup driver to use...
	I0805 04:35:28.553523    9720 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 04:35:28.560880    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 04:35:28.566618    9720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 04:35:28.572767    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 04:35:28.577802    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 04:35:28.582313    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 04:35:28.587938    9720 ssh_runner.go:195] Run: which cri-dockerd
	I0805 04:35:28.589248    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 04:35:28.591848    9720 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 04:35:28.596806    9720 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 04:35:28.659212    9720 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 04:35:28.752379    9720 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 04:35:28.752433    9720 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 04:35:28.758335    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:35:28.844913    9720 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 04:35:41.481845    9720 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.636792875s)
	I0805 04:35:41.481920    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 04:35:41.487121    9720 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 04:35:41.495778    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 04:35:41.500714    9720 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 04:35:41.572926    9720 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 04:35:41.647901    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:35:41.730407    9720 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 04:35:41.736744    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 04:35:41.742093    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:35:41.818476    9720 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 04:35:41.858429    9720 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 04:35:41.858525    9720 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 04:35:41.860907    9720 start.go:563] Will wait 60s for crictl version
	I0805 04:35:41.860960    9720 ssh_runner.go:195] Run: which crictl
	I0805 04:35:41.862352    9720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 04:35:41.874505    9720 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0805 04:35:41.874576    9720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 04:35:41.887446    9720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 04:35:41.904521    9720 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0805 04:35:41.904588    9720 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0805 04:35:41.906093    9720 kubeadm.go:883] updating cluster {Name:running-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51233 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0805 04:35:41.906135    9720 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 04:35:41.906170    9720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 04:35:41.916907    9720 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 04:35:41.916915    9720 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 04:35:41.916965    9720 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 04:35:41.919825    9720 ssh_runner.go:195] Run: which lz4
	I0805 04:35:41.921242    9720 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 04:35:41.922405    9720 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 04:35:41.922416    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0805 04:35:42.901385    9720 docker.go:649] duration metric: took 980.165042ms to copy over tarball
	I0805 04:35:42.901442    9720 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 04:35:44.179860    9720 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.278392875s)
	I0805 04:35:44.179875    9720 ssh_runner.go:146] rm: /preloaded.tar.lz4
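The stat at 04:35:41.921242 is an existence probe: a status-1 exit means /preloaded.tar.lz4 is absent, so the ~360 MB preload tarball is transferred, extracted with lz4, and then removed. A sketch of the check-then-copy pattern behind it (the Runner interface is a hypothetical stand-in for ssh_runner):

package provision

import "fmt"

// Runner abstracts "run a command on the guest" plus file transfer.
type Runner interface {
	Run(cmd string) error
	Copy(src, dst string) error
}

// ensurePreload copies the preload tarball only when the guest lacks it,
// mirroring the stat-then-scp sequence in the log above.
func ensurePreload(r Runner, src, dst string) error {
	if err := r.Run(fmt.Sprintf("stat -c %q %s", "%s %y", dst)); err == nil {
		return nil // already present, skip the transfer
	}
	if err := r.Copy(src, dst); err != nil {
		return fmt.Errorf("copying %s: %w", src, err)
	}
	return nil
}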
	I0805 04:35:44.196169    9720 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 04:35:44.199835    9720 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0805 04:35:44.205123    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:35:44.279525    9720 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 04:35:45.431706    9720 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.152154791s)
	I0805 04:35:45.431802    9720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 04:35:45.448162    9720 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 04:35:45.448170    9720 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 04:35:45.448175    9720 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 04:35:45.452217    9720 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:35:45.455358    9720 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:35:45.457734    9720 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:35:45.457879    9720 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:35:45.460147    9720 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:35:45.460190    9720 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:35:45.461868    9720 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:35:45.461888    9720 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:35:45.463287    9720 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:35:45.463775    9720 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:35:45.465271    9720 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:35:45.465438    9720 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 04:35:45.466111    9720 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:35:45.466114    9720 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 04:35:45.467195    9720 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 04:35:45.468143    9720 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
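Each "daemon lookup ... No such image" line is the expected miss path: image.go asks the local Docker daemon first and falls back to fetching the image from the registry, which is where the arch-mismatch warnings further down come from. A sketch of that fallback using go-containerregistry, the library family minikube wraps for image retrieval (options and error handling trimmed):

package provision

import (
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// retrieveImage tries the local daemon first and falls back to the remote
// registry when the daemon does not have the tag.
func retrieveImage(tag string) (v1.Image, error) {
	ref, err := name.ParseReference(tag)
	if err != nil {
		return nil, err
	}
	if img, err := daemon.Image(ref); err == nil {
		return img, nil // hit in the local Docker daemon
	}
	return remote.Image(ref) // the "daemon lookup ... No such image" path
}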
	I0805 04:35:45.881344    9720 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:35:45.894630    9720 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0805 04:35:45.894677    9720 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:35:45.894731    9720 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:35:45.894732    9720 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:35:45.895889    9720 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:35:45.898975    9720 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:35:45.911954    9720 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0805 04:35:45.911986    9720 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:35:45.912046    9720 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:35:45.920254    9720 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0805 04:35:45.926909    9720 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0805 04:35:45.927896    9720 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 04:35:45.927989    9720 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:35:45.928949    9720 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0805 04:35:45.928965    9720 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:35:45.928984    9720 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0805 04:35:45.928992    9720 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:35:45.928997    9720 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:35:45.929021    9720 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:35:45.934473    9720 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0805 04:35:45.944813    9720 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0805 04:35:45.944836    9720 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0805 04:35:45.944890    9720 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0805 04:35:45.956718    9720 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0805 04:35:45.956740    9720 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:35:45.956796    9720 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:35:45.960604    9720 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0805 04:35:45.963272    9720 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0805 04:35:45.963296    9720 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 04:35:45.963276    9720 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0805 04:35:45.963402    9720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0805 04:35:45.977253    9720 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 04:35:45.977329    9720 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0805 04:35:45.977339    9720 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0805 04:35:45.977349    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0805 04:35:45.977350    9720 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0805 04:35:45.977372    9720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0805 04:35:45.977397    9720 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0805 04:35:45.986084    9720 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0805 04:35:45.986097    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0805 04:35:46.001488    9720 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0805 04:35:46.001500    9720 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 04:35:46.001516    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0805 04:35:46.001603    9720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0805 04:35:46.035149    9720 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0805 04:35:46.035161    9720 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0805 04:35:46.035179    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0805 04:35:46.095890    9720 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0805 04:35:46.095907    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0805 04:35:46.097835    9720 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 04:35:46.097926    9720 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:35:46.206863    9720 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0805 04:35:46.206864    9720 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0805 04:35:46.206890    9720 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:35:46.206950    9720 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:35:46.337348    9720 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0805 04:35:46.337362    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0805 04:35:46.501714    9720 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 04:35:46.501728    9720 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0805 04:35:46.501833    9720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0805 04:35:46.503193    9720 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0805 04:35:46.503204    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0805 04:35:46.534200    9720 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 04:35:46.534216    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0805 04:35:46.773554    9720 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 04:35:46.773585    9720 cache_images.go:92] duration metric: took 1.325391s to LoadCachedImages
	W0805 04:35:46.773632    9720 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
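Each transferred tarball is piped into the runtime with /bin/bash -c "sudo cat <tar> | docker load". The closing warning is non-fatal here: kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler were never written into the local cache directory, so only the images that were cached get loaded. A sketch of composing the load pipeline (the helper name is hypothetical):

package provision

import "fmt"

// loadImageCmd builds the pipeline used in the log to stream a cached
// image tarball into Docker from inside the guest.
func loadImageCmd(path string) string {
	return fmt.Sprintf("/bin/bash -c %q", "sudo cat "+path+" | docker load")
}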
	I0805 04:35:46.773641    9720 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0805 04:35:46.773704    9720 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-763000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 04:35:46.773765    9720 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 04:35:46.787133    9720 cni.go:84] Creating CNI manager for ""
	I0805 04:35:46.787143    9720 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:35:46.787151    9720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 04:35:46.787159    9720 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-763000 NodeName:running-upgrade-763000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 04:35:46.787226    9720 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-763000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
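The kubeadm config above is rendered from the options struct logged at kubeadm.go:181. A toy text/template rendering of just the nodeRegistration fragment, to show how fields such as NodeName and CRISocket flow into the YAML; the template text is illustrative, not minikube's actual template:

package provision

import (
	"strings"
	"text/template"
)

const nodeRegTmpl = `nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type nodeRegParams struct {
	CRISocket string // e.g. /var/run/cri-dockerd.sock
	NodeName  string // e.g. running-upgrade-763000
	NodeIP    string // e.g. 10.0.2.15
}

// renderNodeRegistration fills the nodeRegistration block of kubeadm.yaml.
func renderNodeRegistration(p nodeRegParams) (string, error) {
	t, err := template.New("nodereg").Parse(nodeRegTmpl)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := t.Execute(&b, p); err != nil {
		return "", err
	}
	return b.String(), nil
}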
	
	I0805 04:35:46.787277    9720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0805 04:35:46.790318    9720 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 04:35:46.790350    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 04:35:46.793318    9720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0805 04:35:46.798503    9720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 04:35:46.803539    9720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0805 04:35:46.808591    9720 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0805 04:35:46.809856    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:35:46.873922    9720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 04:35:46.878878    9720 certs.go:68] Setting up /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000 for IP: 10.0.2.15
	I0805 04:35:46.878883    9720 certs.go:194] generating shared ca certs ...
	I0805 04:35:46.878891    9720 certs.go:226] acquiring lock for ca certs: {Name:mk0fb10f8f63b8d852122cff16e2a9135337707a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:35:46.879127    9720 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.key
	I0805 04:35:46.879162    9720 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/proxy-client-ca.key
	I0805 04:35:46.879167    9720 certs.go:256] generating profile certs ...
	I0805 04:35:46.879238    9720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/client.key
	I0805 04:35:46.879249    9720 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/apiserver.key.7e4819ee
	I0805 04:35:46.879261    9720 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/apiserver.crt.7e4819ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0805 04:35:46.931019    9720 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/apiserver.crt.7e4819ee ...
	I0805 04:35:46.931023    9720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/apiserver.crt.7e4819ee: {Name:mk2240ccd3034c4ad89b7ff9e98120d5ff3ee731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:35:46.931238    9720 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/apiserver.key.7e4819ee ...
	I0805 04:35:46.931244    9720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/apiserver.key.7e4819ee: {Name:mk813bc42a5d2b8e5d59bc8367540ab1fd370829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:35:46.931360    9720 certs.go:381] copying /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/apiserver.crt.7e4819ee -> /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/apiserver.crt
	I0805 04:35:46.931548    9720 certs.go:385] copying /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/apiserver.key.7e4819ee -> /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/apiserver.key
	I0805 04:35:46.931706    9720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/proxy-client.key
	I0805 04:35:46.931828    9720 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/7624.pem (1338 bytes)
	W0805 04:35:46.931850    9720 certs.go:480] ignoring /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/7624_empty.pem, impossibly tiny 0 bytes
	I0805 04:35:46.931855    9720 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 04:35:46.931873    9720 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem (1078 bytes)
	I0805 04:35:46.931891    9720 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem (1123 bytes)
	I0805 04:35:46.931909    9720 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/key.pem (1675 bytes)
	I0805 04:35:46.931947    9720 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/ssl/certs/76242.pem (1708 bytes)
	I0805 04:35:46.932292    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 04:35:46.939534    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 04:35:46.947134    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 04:35:46.954678    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 04:35:46.961853    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 04:35:46.968539    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 04:35:46.975084    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 04:35:46.982461    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 04:35:46.990067    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/7624.pem --> /usr/share/ca-certificates/7624.pem (1338 bytes)
	I0805 04:35:46.997505    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/ssl/certs/76242.pem --> /usr/share/ca-certificates/76242.pem (1708 bytes)
	I0805 04:35:47.004811    9720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 04:35:47.011587    9720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 04:35:47.016311    9720 ssh_runner.go:195] Run: openssl version
	I0805 04:35:47.018043    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 04:35:47.021616    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 04:35:47.023057    9720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:35 /usr/share/ca-certificates/minikubeCA.pem
	I0805 04:35:47.023078    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 04:35:47.024877    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 04:35:47.027830    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7624.pem && ln -fs /usr/share/ca-certificates/7624.pem /etc/ssl/certs/7624.pem"
	I0805 04:35:47.030732    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7624.pem
	I0805 04:35:47.032129    9720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:23 /usr/share/ca-certificates/7624.pem
	I0805 04:35:47.032150    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7624.pem
	I0805 04:35:47.033836    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7624.pem /etc/ssl/certs/51391683.0"
	I0805 04:35:47.036970    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76242.pem && ln -fs /usr/share/ca-certificates/76242.pem /etc/ssl/certs/76242.pem"
	I0805 04:35:47.040144    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76242.pem
	I0805 04:35:47.041483    9720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:23 /usr/share/ca-certificates/76242.pem
	I0805 04:35:47.041502    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76242.pem
	I0805 04:35:47.043355    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76242.pem /etc/ssl/certs/3ec20f2e.0"
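The openssl x509 -hash calls above compute the subject-hash filenames (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL-style clients use to look up CAs under /etc/ssl/certs. A local sketch of the hash-then-symlink step (paths are illustrative):

package provision

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert symlinks /etc/ssl/certs/<subject-hash>.0 at the given PEM,
// mirroring the openssl x509 -hash plus ln -fs sequence from the log.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}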
	I0805 04:35:47.045993    9720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 04:35:47.047523    9720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 04:35:47.049167    9720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 04:35:47.051066    9720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 04:35:47.052781    9720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 04:35:47.054917    9720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 04:35:47.056722    9720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
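openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 24 hours; that is how this restart path decides whether the control-plane certs need regeneration. The equivalent check in pure Go (a sketch):

package provision

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file will
// expire within d, equivalent to `openssl x509 -checkend` failing.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}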
	I0805 04:35:47.058511    9720 kubeadm.go:392] StartCluster: {Name:running-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51233 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 04:35:47.058576    9720 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 04:35:47.068241    9720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 04:35:47.071742    9720 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 04:35:47.071747    9720 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 04:35:47.071769    9720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 04:35:47.074518    9720 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 04:35:47.074556    9720 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-763000" does not appear in /Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:35:47.074571    9720 kubeconfig.go:62] /Users/jenkins/minikube-integration/19377-7130/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-763000" cluster setting kubeconfig missing "running-upgrade-763000" context setting]
	I0805 04:35:47.074770    9720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/kubeconfig: {Name:mk9388f295704cbd2679ba0e5c0bb91678f79ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:35:47.075674    9720 kapi.go:59] client config for running-upgrade-763000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/client.key", CAFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105fe41b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 04:35:47.076515    9720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 04:35:47.079546    9720 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-763000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0805 04:35:47.079553    9720 kubeadm.go:1160] stopping kube-system containers ...
	I0805 04:35:47.079590    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 04:35:47.090328    9720 docker.go:483] Stopping containers: [0b6adf327e2b 5eb0a19d8864 07d00f86e018 ba2510eb9fe9 571fe6bf4cec 38ba4461286e 98ed5c4adbd8 2ecf263175c1 ab095ace8ff8 c273dc83bd70 72bf27654481 3d51b4d0c5d7 1abf73ca754a]
	I0805 04:35:47.090392    9720 ssh_runner.go:195] Run: docker stop 0b6adf327e2b 5eb0a19d8864 07d00f86e018 ba2510eb9fe9 571fe6bf4cec 38ba4461286e 98ed5c4adbd8 2ecf263175c1 ab095ace8ff8 c273dc83bd70 72bf27654481 3d51b4d0c5d7 1abf73ca754a
	I0805 04:35:47.101623    9720 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 04:35:47.196240    9720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 04:35:47.200146    9720 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug  5 11:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug  5 11:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug  5 11:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Aug  5 11:35 /etc/kubernetes/scheduler.conf
	
	I0805 04:35:47.200175    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/admin.conf
	I0805 04:35:47.203462    9720 kubeadm.go:163] "https://control-plane.minikube.internal:51233" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 04:35:47.203504    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 04:35:47.206852    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/kubelet.conf
	I0805 04:35:47.209774    9720 kubeadm.go:163] "https://control-plane.minikube.internal:51233" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 04:35:47.209798    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 04:35:47.212621    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/controller-manager.conf
	I0805 04:35:47.215314    9720 kubeadm.go:163] "https://control-plane.minikube.internal:51233" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 04:35:47.215333    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 04:35:47.218488    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/scheduler.conf
	I0805 04:35:47.221352    9720 kubeadm.go:163] "https://control-plane.minikube.internal:51233" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 04:35:47.221378    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 04:35:47.223863    9720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
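The grep/rm cycle above (kubeadm.go:163) checks each /etc/kubernetes/*.conf for the expected control-plane endpoint; grep exiting with status 1 means the endpoint is absent, so the file is deleted and regenerated by the "kubeadm init phase kubeconfig" step that follows. A minimal sketch of that stale-config pattern; the helper name is hypothetical and this is not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
)

// removeIfStale deletes conf when it does not mention endpoint, so that a
// later "kubeadm init phase kubeconfig" run regenerates it from scratch.
func removeIfStale(endpoint, conf string) error {
	if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
		// grep exited non-zero: endpoint not found, treat the file as stale.
		fmt.Printf("%q not in %s - removing\n", endpoint, conf)
		return exec.Command("sudo", "rm", "-f", conf).Run()
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:51233"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(endpoint, conf); err != nil {
			panic(err)
		}
	}
}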
	I0805 04:35:47.226998    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:35:47.247441    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:35:47.647256    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:35:47.850193    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:35:47.872968    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:35:47.904923    9720 api_server.go:52] waiting for apiserver process to appear ...
	I0805 04:35:47.904993    9720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:35:48.407204    9720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:35:48.907070    9720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:35:48.911090    9720 api_server.go:72] duration metric: took 1.00615975s to wait for apiserver process to appear ...
	I0805 04:35:48.911099    9720 api_server.go:88] waiting for apiserver healthz status ...
	I0805 04:35:48.911108    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:35:53.913316    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:35:53.913352    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:35:58.913853    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:35:58.913942    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:36:03.914811    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:36:03.914833    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:36:08.915591    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:36:08.915706    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:36:13.917126    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:36:13.917205    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:36:18.918981    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:36:18.919068    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:36:23.921336    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:36:23.921405    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:36:28.924035    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:36:28.924111    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:36:33.926816    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:36:33.926891    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:36:38.929516    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:36:38.929588    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:36:43.932275    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:36:43.932345    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:36:48.933902    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
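From this point the run settles into a retry loop: each /healthz probe against 10.0.2.15:8443 times out after roughly five seconds, minikube gathers container and journal logs for diagnosis, then probes again until the overall wait expires. A minimal sketch of that polling pattern using only the standard library; the timeout values are assumptions read off the log timestamps, and InsecureSkipVerify stands in for the real CA configuration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Per-probe timeout mirrors the ~5 s gap between each "Checking" and
	// "stopped" pair in the log above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute) // overall wait (assumed)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // brief pause between probes
	}
	fmt.Println("timed out waiting for apiserver healthz")
}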
	I0805 04:36:48.934391    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:36:48.974791    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:36:48.974921    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:36:48.996179    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:36:48.996281    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:36:49.013113    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:36:49.013186    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:36:49.026283    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:36:49.026354    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:36:49.037156    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:36:49.037215    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:36:49.047766    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:36:49.047824    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:36:49.057966    9720 logs.go:276] 0 containers: []
	W0805 04:36:49.057975    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:36:49.058024    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:36:49.074198    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:36:49.074222    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:36:49.074228    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:36:49.088691    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:36:49.088702    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:36:49.102882    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:36:49.102891    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:36:49.117942    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:36:49.117956    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:36:49.132592    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:36:49.132604    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:36:49.144610    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:36:49.144625    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:36:49.164798    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:36:49.164807    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:36:49.191683    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:36:49.191692    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:36:49.228842    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:36:49.228850    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:36:49.240703    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:36:49.240714    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:36:49.254933    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:36:49.254945    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:36:49.266877    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:36:49.266886    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:36:49.271728    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:36:49.271738    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:36:49.342613    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:36:49.342626    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:36:49.386584    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:36:49.386595    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:36:49.397451    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:36:49.397461    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:36:49.413659    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:36:49.413668    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:36:51.926271    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:36:56.929149    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:36:56.929557    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:36:56.968634    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:36:56.968776    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:36:56.992861    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:36:56.992959    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:36:57.007529    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:36:57.007607    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:36:57.019702    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:36:57.019761    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:36:57.030174    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:36:57.030245    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:36:57.040552    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:36:57.040631    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:36:57.050758    9720 logs.go:276] 0 containers: []
	W0805 04:36:57.050770    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:36:57.050827    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:36:57.069397    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:36:57.069413    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:36:57.069418    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:36:57.083673    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:36:57.083684    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:36:57.094830    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:36:57.094841    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:36:57.121264    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:36:57.121279    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:36:57.143885    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:36:57.143898    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:36:57.162013    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:36:57.162025    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:36:57.166548    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:36:57.166553    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:36:57.181416    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:36:57.181429    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:36:57.218985    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:36:57.218995    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:36:57.234027    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:36:57.234038    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:36:57.245902    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:36:57.245914    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:36:57.284280    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:36:57.284289    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:36:57.296083    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:36:57.296097    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:36:57.312475    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:36:57.312486    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:36:57.326134    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:36:57.326147    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:36:57.363638    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:36:57.363649    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:36:57.378813    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:36:57.378823    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:36:59.892089    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:37:04.894621    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:37:04.895041    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:37:04.932482    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:37:04.932608    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:37:04.957832    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:37:04.957908    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:37:04.979977    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:37:04.980046    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:37:04.990378    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:37:04.990438    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:37:05.001035    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:37:05.001097    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:37:05.012168    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:37:05.012230    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:37:05.023885    9720 logs.go:276] 0 containers: []
	W0805 04:37:05.023895    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:37:05.023945    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:37:05.033869    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:37:05.033887    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:37:05.033892    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:37:05.048430    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:37:05.048443    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:37:05.064235    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:37:05.064246    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:37:05.076525    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:37:05.076535    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:37:05.088112    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:37:05.088121    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:37:05.092373    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:37:05.092382    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:37:05.129194    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:37:05.129207    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:37:05.143675    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:37:05.143686    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:37:05.155391    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:37:05.155400    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:37:05.167295    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:37:05.167306    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:37:05.178767    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:37:05.178779    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:37:05.203651    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:37:05.203662    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:37:05.217452    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:37:05.217462    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:37:05.242467    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:37:05.242481    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:37:05.277306    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:37:05.277321    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:37:05.295373    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:37:05.295385    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:37:05.332440    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:37:05.332452    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:37:07.851958    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:37:12.854602    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:37:12.855007    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:37:12.897367    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:37:12.897486    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:37:12.916046    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:37:12.916124    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:37:12.930122    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:37:12.930192    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:37:12.941519    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:37:12.941583    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:37:12.953609    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:37:12.953677    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:37:12.964286    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:37:12.964344    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:37:12.974243    9720 logs.go:276] 0 containers: []
	W0805 04:37:12.974254    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:37:12.974304    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:37:12.993261    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:37:12.993284    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:37:12.993290    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:37:13.008477    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:37:13.008490    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:37:13.027712    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:37:13.027725    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:37:13.062728    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:37:13.062741    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:37:13.074443    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:37:13.074455    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:37:13.085831    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:37:13.085844    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:37:13.109666    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:37:13.109674    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:37:13.121101    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:37:13.121114    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:37:13.125595    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:37:13.125603    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:37:13.139411    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:37:13.139420    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:37:13.159107    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:37:13.159119    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:37:13.170694    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:37:13.170707    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:37:13.188478    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:37:13.188489    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:37:13.199608    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:37:13.199618    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:37:13.237408    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:37:13.237422    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:37:13.278218    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:37:13.278228    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:37:13.292442    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:37:13.292451    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:37:15.810338    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:37:20.813346    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:37:20.813761    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:37:20.855078    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:37:20.855219    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:37:20.876630    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:37:20.876740    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:37:20.891575    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:37:20.891650    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:37:20.904305    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:37:20.904386    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:37:20.914906    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:37:20.914963    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:37:20.926073    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:37:20.926130    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:37:20.936735    9720 logs.go:276] 0 containers: []
	W0805 04:37:20.936746    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:37:20.936804    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:37:20.946987    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:37:20.947007    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:37:20.947013    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:37:20.982510    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:37:20.982521    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:37:21.022034    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:37:21.022043    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:37:21.035766    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:37:21.035777    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:37:21.048202    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:37:21.048214    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:37:21.059751    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:37:21.059762    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:37:21.064121    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:37:21.064129    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:37:21.078746    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:37:21.078757    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:37:21.091086    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:37:21.091097    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:37:21.102702    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:37:21.102713    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:37:21.114382    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:37:21.114391    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:37:21.126115    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:37:21.126126    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:37:21.141664    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:37:21.141674    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:37:21.158945    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:37:21.158956    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:37:21.194204    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:37:21.194217    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:37:21.208295    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:37:21.208304    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:37:21.222346    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:37:21.222357    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:37:23.750474    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:37:28.753426    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:37:28.753870    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:37:28.791664    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:37:28.791789    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:37:28.811916    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:37:28.812035    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:37:28.827076    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:37:28.827140    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:37:28.839433    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:37:28.839501    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:37:28.850150    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:37:28.850212    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:37:28.866002    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:37:28.866064    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:37:28.876222    9720 logs.go:276] 0 containers: []
	W0805 04:37:28.876234    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:37:28.876290    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:37:28.886400    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:37:28.886417    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:37:28.886422    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:37:28.900455    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:37:28.900469    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:37:28.904678    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:37:28.904684    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:37:28.939934    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:37:28.939946    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:37:28.955853    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:37:28.955864    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:37:28.979851    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:37:28.979858    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:37:28.994571    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:37:28.994587    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:37:29.006127    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:37:29.006138    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:37:29.017383    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:37:29.017394    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:37:29.037057    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:37:29.037068    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:37:29.050082    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:37:29.050105    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:37:29.061228    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:37:29.061240    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:37:29.073030    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:37:29.073040    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:37:29.085353    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:37:29.085368    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:37:29.123160    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:37:29.123170    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:37:29.137481    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:37:29.137492    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:37:29.171955    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:37:29.171964    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:37:31.691265    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:37:36.693582    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:37:36.693973    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:37:36.726630    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:37:36.726749    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:37:36.747206    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:37:36.747286    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:37:36.761765    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:37:36.761840    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:37:36.774264    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:37:36.774318    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:37:36.786718    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:37:36.786793    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:37:36.798644    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:37:36.798698    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:37:36.809386    9720 logs.go:276] 0 containers: []
	W0805 04:37:36.809397    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:37:36.809446    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:37:36.822798    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:37:36.822815    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:37:36.822820    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:37:36.862002    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:37:36.862018    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:37:36.877637    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:37:36.877648    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:37:36.897225    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:37:36.897235    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:37:36.911374    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:37:36.911382    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:37:36.948207    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:37:36.948216    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:37:36.962061    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:37:36.962070    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:37:36.976536    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:37:36.976544    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:37:36.988212    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:37:36.988221    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:37:36.992564    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:37:36.992570    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:37:37.006541    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:37:37.006551    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:37:37.018665    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:37:37.018674    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:37:37.031178    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:37:37.031190    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:37:37.071653    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:37:37.071673    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:37:37.087895    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:37:37.087915    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:37:37.102330    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:37:37.102343    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:37:37.118118    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:37:37.118134    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:37:39.645977    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:37:44.648832    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:37:44.648903    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:37:44.660426    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:37:44.660489    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:37:44.671484    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:37:44.671545    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:37:44.682125    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:37:44.682204    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:37:44.692933    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:37:44.692990    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:37:44.704190    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:37:44.704254    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:37:44.714789    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:37:44.714831    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:37:44.726643    9720 logs.go:276] 0 containers: []
	W0805 04:37:44.726654    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:37:44.726687    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:37:44.737728    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:37:44.737741    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:37:44.737746    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:37:44.742002    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:37:44.742009    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:37:44.761004    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:37:44.761015    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:37:44.777301    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:37:44.777311    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:37:44.790536    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:37:44.790550    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:37:44.828210    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:37:44.828220    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:37:44.839979    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:37:44.839990    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:37:44.851302    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:37:44.851312    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:37:44.862292    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:37:44.862305    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:37:44.876070    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:37:44.876083    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:37:44.900617    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:37:44.900627    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:37:44.936544    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:37:44.936552    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:37:44.952095    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:37:44.952107    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:37:44.966092    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:37:44.966103    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:37:44.980569    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:37:44.980579    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:37:45.015299    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:37:45.015310    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:37:45.027246    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:37:45.027256    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:37:47.545546    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:37:52.546460    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:37:52.546875    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:37:52.582042    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:37:52.582168    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:37:52.603116    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:37:52.603201    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:37:52.617335    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:37:52.617399    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:37:52.629495    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:37:52.629560    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:37:52.640421    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:37:52.640476    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:37:52.653480    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:37:52.653536    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:37:52.663938    9720 logs.go:276] 0 containers: []
	W0805 04:37:52.663952    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:37:52.664005    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:37:52.676103    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:37:52.676126    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:37:52.676132    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:37:52.690331    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:37:52.690341    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:37:52.702173    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:37:52.702186    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:37:52.714500    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:37:52.714513    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:37:52.749579    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:37:52.749588    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:37:52.753713    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:37:52.753721    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:37:52.787670    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:37:52.787679    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:37:52.825707    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:37:52.825717    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:37:52.839856    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:37:52.839867    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:37:52.854220    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:37:52.854232    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:37:52.865823    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:37:52.865832    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:37:52.889756    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:37:52.889763    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:37:52.901570    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:37:52.901580    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:37:52.919223    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:37:52.919234    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:37:52.936672    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:37:52.936684    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:37:52.947821    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:37:52.947830    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:37:52.959454    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:37:52.959466    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
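
Each retry cycle in this stretch follows the same shape: probe the apiserver's /healthz endpoint, give up after about five seconds with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", then sweep the guest for diagnostics before the next attempt. The five-second gap between every "Checking apiserver healthz" line and the "stopped" line that follows it is the signature of a client-side request timeout. A minimal sketch of such a probe, assuming a plain net/http client with certificate verification disabled (the guest apiserver presents a self-signed certificate); this is an illustration, not minikube's actual implementation:

	// healthz probe sketch -- illustrative only, not minikube's code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			// Matches the ~5s gap between "Checking" and "stopped" lines.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: the guest apiserver cert is self-signed.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			// e.g. context deadline exceeded (Client.Timeout exceeded while awaiting headers)
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
		}
	}
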
	I0805 04:37:55.474480    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:38:00.476938    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:38:00.477442    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:38:00.519729    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:38:00.519860    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:38:00.540995    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:38:00.541111    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:38:00.559929    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:38:00.559999    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:38:00.572290    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:38:00.572346    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:38:00.583430    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:38:00.583495    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:38:00.593860    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:38:00.593912    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:38:00.604363    9720 logs.go:276] 0 containers: []
	W0805 04:38:00.604374    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:38:00.604426    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:38:00.614644    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:38:00.614663    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:38:00.614669    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:38:00.632911    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:38:00.632924    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:38:00.647243    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:38:00.647255    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:38:00.660758    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:38:00.660770    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:38:00.675746    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:38:00.675759    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:38:00.696020    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:38:00.696035    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:38:00.708035    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:38:00.708052    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:38:00.751008    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:38:00.751021    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:38:00.765572    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:38:00.765585    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:38:00.776466    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:38:00.776476    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:38:00.788779    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:38:00.788789    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:38:00.805867    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:38:00.805877    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:38:00.817205    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:38:00.817217    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:38:00.828222    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:38:00.828233    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:38:00.851964    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:38:00.851971    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:38:00.886608    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:38:00.886615    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:38:00.890661    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:38:00.890667    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
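
Before each sweep, the runner enumerates every control-plane container by name. kubeadm-managed containers follow the k8s_<component>_... naming convention, so one docker ps name filter per component is enough; the "2 containers" counts above reflect an exited container plus its replacement after the restart. A sketch of that lookup, assuming direct local docker access rather than the SSH runner used in this report:

	// Container lookup sketch -- assumes local docker, not minikube's SSH runner.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers (running or exited)
	// whose name matches the kubeadm convention k8s_<component>.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}
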
	I0805 04:38:03.429523    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:38:08.431476    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:38:08.431593    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:38:08.443651    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:38:08.443715    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:38:08.455735    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:38:08.455810    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:38:08.467964    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:38:08.468031    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:38:08.478772    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:38:08.478839    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:38:08.489287    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:38:08.489343    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:38:08.500318    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:38:08.500371    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:38:08.510826    9720 logs.go:276] 0 containers: []
	W0805 04:38:08.510840    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:38:08.510902    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:38:08.521494    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:38:08.521513    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:38:08.521519    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:38:08.538742    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:38:08.538758    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:38:08.550811    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:38:08.550823    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:38:08.569515    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:38:08.569525    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:38:08.574398    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:38:08.574405    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:38:08.586170    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:38:08.586182    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:38:08.598284    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:38:08.598294    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:38:08.609874    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:38:08.609884    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:38:08.648352    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:38:08.648369    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:38:08.690733    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:38:08.690746    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:38:08.707049    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:38:08.707091    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:38:08.721110    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:38:08.721123    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:38:08.747521    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:38:08.747544    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:38:08.766409    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:38:08.766430    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:38:08.807297    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:38:08.807318    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:38:08.823316    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:38:08.823335    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:38:08.841150    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:38:08.841169    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
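
Every container found this way then has its last 400 log lines captured with "docker logs --tail 400 <id>", wrapped in /bin/bash -c by the SSH runner. A local equivalent, sketched without the SSH hop:

	// Per-container log tail sketch -- the real runner wraps this in
	// /bin/bash -c over SSH; here docker is exec'd directly.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func tailLogs(id string, lines int) (string, error) {
		// docker logs writes to both stdout and stderr; CombinedOutput
		// captures both, mirroring what the report embeds per component.
		out, err := exec.Command("docker", "logs",
			"--tail", fmt.Sprint(lines), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		// 452a7ef216d4 is one of the kube-apiserver containers above.
		out, err := tailLogs("452a7ef216d4", 400)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Print(out)
	}
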
	I0805 04:38:11.359021    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:38:16.361512    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:38:16.361937    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:38:16.402006    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:38:16.402135    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:38:16.422368    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:38:16.422465    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:38:16.437376    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:38:16.437456    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:38:16.450014    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:38:16.450076    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:38:16.460747    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:38:16.460805    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:38:16.471112    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:38:16.471179    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:38:16.481338    9720 logs.go:276] 0 containers: []
	W0805 04:38:16.481349    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:38:16.481394    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:38:16.492067    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:38:16.492086    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:38:16.492091    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:38:16.506332    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:38:16.506344    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:38:16.517732    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:38:16.517744    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:38:16.543198    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:38:16.543206    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:38:16.547219    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:38:16.547226    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:38:16.582208    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:38:16.582226    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:38:16.597249    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:38:16.597262    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:38:16.612355    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:38:16.612366    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:38:16.655302    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:38:16.655311    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:38:16.670368    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:38:16.670377    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:38:16.681538    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:38:16.681550    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:38:16.694453    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:38:16.694467    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:38:16.731161    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:38:16.731177    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:38:16.744906    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:38:16.744914    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:38:16.756562    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:38:16.756571    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:38:16.773424    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:38:16.773434    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:38:16.785687    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:38:16.785698    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
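
The sweep also pulls host-level sources: the kubelet and docker/cri-docker journals via journalctl, and kernel messages via dmesg restricted to warning level and above. The same commands, run locally instead of through ssh_runner, would look like this sketch:

	// Host-level gathering sketch -- run locally, not via the SSH runner.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a shell pipeline, like the runner's /bin/bash -c wrapper.
	func run(cmd string) string {
		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out)
	}

	func main() {
		fmt.Print(run("sudo journalctl -u kubelet -n 400"))              // kubelet unit log
		fmt.Print(run("sudo journalctl -u docker -u cri-docker -n 400")) // Docker + CRI shim
		// dmesg: human-readable, no pager, colour off, warnings and above only.
		fmt.Print(run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"))
	}
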
	I0805 04:38:19.297743    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:38:24.300001    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:38:24.300130    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:38:24.311662    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:38:24.311732    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:38:24.324945    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:38:24.325022    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:38:24.335795    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:38:24.335858    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:38:24.355569    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:38:24.355639    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:38:24.366530    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:38:24.366599    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:38:24.380653    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:38:24.380720    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:38:24.391758    9720 logs.go:276] 0 containers: []
	W0805 04:38:24.391769    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:38:24.391825    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:38:24.402222    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:38:24.402239    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:38:24.402244    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:38:24.414134    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:38:24.414145    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:38:24.428213    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:38:24.428224    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:38:24.453779    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:38:24.453788    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:38:24.468114    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:38:24.468125    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:38:24.480354    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:38:24.480369    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:38:24.494779    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:38:24.494789    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:38:24.507479    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:38:24.507491    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:38:24.511809    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:38:24.511817    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:38:24.547040    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:38:24.547051    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:38:24.563667    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:38:24.563678    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:38:24.575341    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:38:24.575350    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:38:24.590765    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:38:24.590774    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:38:24.603339    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:38:24.603350    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:38:24.620969    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:38:24.620980    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:38:24.632609    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:38:24.632621    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:38:24.669336    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:38:24.669346    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:38:27.211693    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:38:32.214152    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:38:32.214299    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:38:32.230500    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:38:32.230567    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:38:32.241425    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:38:32.241519    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:38:32.252000    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:38:32.252069    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:38:32.262665    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:38:32.262736    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:38:32.274370    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:38:32.274441    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:38:32.285519    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:38:32.285583    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:38:32.295848    9720 logs.go:276] 0 containers: []
	W0805 04:38:32.295861    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:38:32.295916    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:38:32.306245    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:38:32.306263    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:38:32.306269    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:38:32.326149    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:38:32.326162    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:38:32.340856    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:38:32.340867    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:38:32.352192    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:38:32.352203    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:38:32.363155    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:38:32.363163    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:38:32.375022    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:38:32.375033    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:38:32.386777    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:38:32.386788    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:38:32.390899    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:38:32.390907    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:38:32.408582    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:38:32.408593    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:38:32.431491    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:38:32.431498    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:38:32.448344    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:38:32.448354    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:38:32.462561    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:38:32.462573    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:38:32.476573    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:38:32.476588    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:38:32.488068    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:38:32.488077    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:38:32.525192    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:38:32.525206    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:38:32.562569    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:38:32.562579    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:38:32.584174    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:38:32.584185    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
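
The "container status" step is a shell fallback chain: the backtick expression substitutes the crictl path when "which crictl" finds one (or the bare word crictl otherwise), and if that invocation fails the runner falls back to plain "sudo docker ps -a". The same preference order in Go, as a sketch rather than the actual runner:

	// Container-status fallback sketch: prefer crictl when on PATH,
	// otherwise plain docker ps, as the backtick expression above encodes.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func containerStatus() (string, error) {
		if _, err := exec.LookPath("crictl"); err == nil {
			out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
			if err == nil {
				return string(out), nil
			}
		}
		// crictl missing or failed: fall back to docker.
		out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Print(out)
	}
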
	I0805 04:38:35.130657    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:38:40.132699    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:38:40.132811    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:38:40.144769    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:38:40.144856    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:38:40.155610    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:38:40.155683    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:38:40.166460    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:38:40.166532    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:38:40.177174    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:38:40.177238    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:38:40.187845    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:38:40.187913    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:38:40.198455    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:38:40.198524    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:38:40.208916    9720 logs.go:276] 0 containers: []
	W0805 04:38:40.208930    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:38:40.208986    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:38:40.219422    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:38:40.219439    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:38:40.219445    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:38:40.255218    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:38:40.255229    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:38:40.270869    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:38:40.270880    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:38:40.285551    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:38:40.285562    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:38:40.310687    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:38:40.310694    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:38:40.348141    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:38:40.348156    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:38:40.353086    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:38:40.353092    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:38:40.366942    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:38:40.366953    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:38:40.381890    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:38:40.381901    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:38:40.397214    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:38:40.397225    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:38:40.433414    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:38:40.433424    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:38:40.451215    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:38:40.451225    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:38:40.465330    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:38:40.465344    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:38:40.477795    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:38:40.477807    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:38:40.492579    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:38:40.492590    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:38:40.507874    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:38:40.507884    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:38:40.519342    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:38:40.519353    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:38:43.033651    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:38:48.035988    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:38:48.036117    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:38:48.046975    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:38:48.047043    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:38:48.059530    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:38:48.059613    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:38:48.070890    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:38:48.070949    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:38:48.081347    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:38:48.081415    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:38:48.092113    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:38:48.092186    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:38:48.103474    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:38:48.103544    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:38:48.113466    9720 logs.go:276] 0 containers: []
	W0805 04:38:48.113478    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:38:48.113538    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:38:48.124193    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:38:48.124213    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:38:48.124219    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:38:48.135729    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:38:48.135742    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:38:48.172053    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:38:48.172063    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:38:48.183807    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:38:48.183818    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:38:48.198812    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:38:48.198822    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:38:48.212510    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:38:48.212521    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:38:48.224275    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:38:48.224286    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:38:48.228572    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:38:48.228581    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:38:48.264170    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:38:48.264183    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:38:48.276451    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:38:48.276464    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:38:48.287678    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:38:48.287689    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:38:48.301960    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:38:48.301970    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:38:48.318914    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:38:48.318927    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:38:48.342553    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:38:48.342563    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:38:48.378007    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:38:48.378017    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:38:48.392139    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:38:48.392149    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:38:48.406038    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:38:48.406052    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
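
The "describe nodes" step deliberately bypasses the host's kubectl: the runner invokes the kubectl binary minikube downloaded for the cluster's own Kubernetes version (v1.24.1 here) inside the guest, with an explicit --kubeconfig, so the dump works even when the host context is unusable. Sketched as a local invocation, with the paths exactly as they appear in the log:

	// describe-nodes sketch: version-pinned kubectl with an explicit
	// kubeconfig; paths taken verbatim from the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.24.1/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Print(string(out))
	}
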
	I0805 04:38:50.918944    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:38:55.920825    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:38:55.921117    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:38:55.947297    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:38:55.947411    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:38:55.966278    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:38:55.966365    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:38:55.979205    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:38:55.979272    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:38:55.990654    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:38:55.990725    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:38:56.009055    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:38:56.009127    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:38:56.027464    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:38:56.027530    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:38:56.041695    9720 logs.go:276] 0 containers: []
	W0805 04:38:56.041709    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:38:56.041765    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:38:56.055015    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:38:56.055033    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:38:56.055039    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:38:56.091633    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:38:56.091643    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:38:56.128131    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:38:56.128141    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:38:56.139770    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:38:56.139781    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:38:56.153780    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:38:56.153791    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:38:56.169652    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:38:56.169663    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:38:56.187403    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:38:56.187414    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:38:56.201970    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:38:56.201981    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:38:56.213651    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:38:56.213662    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:38:56.224999    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:38:56.225009    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:38:56.240639    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:38:56.240651    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:38:56.251734    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:38:56.251745    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:38:56.274874    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:38:56.274884    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:38:56.286595    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:38:56.286605    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:38:56.290919    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:38:56.290929    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:38:56.304534    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:38:56.304545    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:38:56.340706    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:38:56.340717    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:38:58.860022    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:03.862810    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:03.863241    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:39:03.905554    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:39:03.905671    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:39:03.926119    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:39:03.926205    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:39:03.941419    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:39:03.941493    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:39:03.953974    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:39:03.954053    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:39:03.965126    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:39:03.965198    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:39:03.975367    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:39:03.975440    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:39:03.985225    9720 logs.go:276] 0 containers: []
	W0805 04:39:03.985242    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:39:03.985321    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:39:03.998048    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:39:03.998066    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:39:03.998071    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:39:04.012004    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:39:04.012013    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:39:04.050178    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:39:04.050191    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:39:04.054617    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:39:04.054624    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:39:04.073578    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:39:04.073590    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:39:04.085292    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:39:04.085301    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:39:04.107815    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:39:04.107822    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:39:04.119835    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:39:04.119847    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:39:04.154480    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:39:04.154493    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:39:04.170243    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:39:04.170253    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:39:04.185604    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:39:04.185617    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:39:04.205157    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:39:04.205171    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:39:04.242120    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:39:04.242133    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:39:04.253891    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:39:04.253902    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:39:04.264849    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:39:04.264860    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:39:04.282902    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:39:04.282915    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:39:04.297335    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:39:04.297347    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:39:06.813147    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:11.815539    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:11.815646    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:39:11.827667    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:39:11.827739    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:39:11.838875    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:39:11.838937    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:39:11.850248    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:39:11.850316    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:39:11.862032    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:39:11.862099    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:39:11.876882    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:39:11.876950    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:39:11.888788    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:39:11.888853    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:39:11.910534    9720 logs.go:276] 0 containers: []
	W0805 04:39:11.910546    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:39:11.910611    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:39:11.921880    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:39:11.921901    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:39:11.921907    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:39:11.959569    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:39:11.959581    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:39:11.971439    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:39:11.971451    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:39:11.996130    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:39:11.996148    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:39:12.009635    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:39:12.009648    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:39:12.014200    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:39:12.014212    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:39:12.029457    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:39:12.029469    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:39:12.042397    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:39:12.042408    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:39:12.057698    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:39:12.057708    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:39:12.075063    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:39:12.075074    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:39:12.087025    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:39:12.087038    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:39:12.102167    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:39:12.102181    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:39:12.140139    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:39:12.140159    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:39:12.178651    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:39:12.178670    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:39:12.193792    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:39:12.193802    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:39:12.232154    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:39:12.232169    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:39:12.245130    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:39:12.245142    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
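
Taken together, the cycle repeats on a fixed cadence: a probe that times out after about five seconds, a diagnostic sweep lasting roughly a second, then a short pause before the next probe, for an apiserver wait that in this run never succeeds. A self-contained sketch of that outer loop; the four-minute budget and the three-second pause are assumptions for illustration, not values taken from the log:

	// Outer wait-loop sketch -- budget and pause are assumed, not logged.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os/exec"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second, // per-probe timeout, as observed above
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute) // assumption: outer budget
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			// Each failed probe triggers the diagnostic sweep; a single
			// docker ps -a stands in for the full sweep here.
			out, _ := exec.Command("docker", "ps", "-a").CombinedOutput()
			fmt.Print(string(out))
			time.Sleep(3 * time.Second) // assumption: pause between probes
		}
		fmt.Println("gave up waiting for apiserver")
	}
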
	I0805 04:39:14.759345    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:19.761166    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:19.761602    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:39:19.803656    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:39:19.803791    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:39:19.828661    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:39:19.828755    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:39:19.844818    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:39:19.844898    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:39:19.857484    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:39:19.857551    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:39:19.867986    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:39:19.868051    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:39:19.878431    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:39:19.878489    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:39:19.888540    9720 logs.go:276] 0 containers: []
	W0805 04:39:19.888550    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:39:19.888598    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:39:19.901171    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:39:19.901190    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:39:19.901196    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:39:19.916323    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:39:19.916337    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:39:19.927929    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:39:19.927942    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:39:19.939821    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:39:19.939833    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:39:19.944654    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:39:19.944663    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:39:19.982779    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:39:19.982790    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:39:20.017372    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:39:20.017381    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:39:20.030381    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:39:20.030390    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:39:20.045943    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:39:20.045954    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:39:20.065957    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:39:20.065968    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:39:20.079925    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:39:20.079934    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:39:20.103605    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:39:20.103615    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:39:20.141070    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:39:20.141079    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:39:20.152616    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:39:20.152629    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:39:20.163856    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:39:20.163866    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:39:20.182046    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:39:20.182059    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:39:20.195682    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:39:20.195694    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:39:22.711641    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:27.713538    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:27.713682    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:39:27.730847    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:39:27.730922    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:39:27.742827    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:39:27.742898    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:39:27.755521    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:39:27.755574    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:39:27.772186    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:39:27.772262    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:39:27.783279    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:39:27.783328    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:39:27.795055    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:39:27.795112    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:39:27.808433    9720 logs.go:276] 0 containers: []
	W0805 04:39:27.808445    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:39:27.808497    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:39:27.820390    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:39:27.820410    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:39:27.820415    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:39:27.840734    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:39:27.840744    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:39:27.865420    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:39:27.865437    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:39:27.878290    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:39:27.878301    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:39:27.882888    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:39:27.882900    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:39:27.923583    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:39:27.923594    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:39:27.938212    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:39:27.938223    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:39:27.962412    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:39:27.962424    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:39:27.975141    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:39:27.975152    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:39:28.014428    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:39:28.014444    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:39:28.029616    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:39:28.029626    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:39:28.043048    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:39:28.043059    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:39:28.059785    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:39:28.059796    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:39:28.072965    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:39:28.072975    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:39:28.110538    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:39:28.110548    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:39:28.125628    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:39:28.125639    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:39:28.136969    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:39:28.136984    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:39:30.656586    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:35.658979    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:35.659474    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:39:35.696901    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:39:35.697045    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:39:35.717372    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:39:35.717467    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:39:35.732259    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:39:35.732332    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:39:35.748076    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:39:35.748149    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:39:35.758539    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:39:35.758605    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:39:35.769664    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:39:35.769735    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:39:35.781093    9720 logs.go:276] 0 containers: []
	W0805 04:39:35.781108    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:39:35.781161    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:39:35.798684    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:39:35.798709    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:39:35.798715    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:39:35.811679    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:39:35.811692    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:39:35.830889    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:39:35.830899    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:39:35.843162    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:39:35.843174    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:39:35.859205    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:39:35.859216    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:39:35.899084    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:39:35.899106    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:39:35.910926    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:39:35.910945    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:39:35.915057    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:39:35.915065    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:39:35.937347    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:39:35.937363    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:39:35.949032    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:39:35.949046    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:39:35.967221    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:39:35.967231    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:39:35.990845    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:39:35.990858    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:39:36.002594    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:39:36.002608    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:39:36.037335    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:39:36.037345    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:39:36.052213    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:39:36.052227    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:39:36.066292    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:39:36.066302    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:39:36.104555    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:39:36.104570    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:39:38.621539    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:43.623895    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:43.624118    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:39:43.645697    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:39:43.645814    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:39:43.661321    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:39:43.661391    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:39:43.673975    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:39:43.674037    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:39:43.685201    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:39:43.685270    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:39:43.695605    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:39:43.695662    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:39:43.705842    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:39:43.705907    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:39:43.716423    9720 logs.go:276] 0 containers: []
	W0805 04:39:43.716433    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:39:43.716485    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:39:43.727251    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:39:43.727269    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:39:43.727274    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:39:43.741018    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:39:43.741029    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:39:43.776792    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:39:43.776802    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:39:43.814630    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:39:43.814643    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:39:43.828817    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:39:43.828833    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:39:43.846289    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:39:43.846298    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:39:43.860982    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:39:43.860992    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:39:43.872379    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:39:43.872389    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:39:43.887809    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:39:43.887819    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:39:43.904387    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:39:43.904402    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:39:43.916503    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:39:43.916513    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:39:43.927885    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:39:43.927899    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:39:43.939871    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:39:43.939885    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:39:43.977318    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:39:43.977326    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:39:43.981815    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:39:43.981821    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:39:43.995780    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:39:43.995795    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:39:44.013981    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:39:44.013992    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:39:46.538341    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:51.540693    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:51.540777    9720 kubeadm.go:597] duration metric: took 4m4.466651709s to restartPrimaryControlPlane
	W0805 04:39:51.540873    9720 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
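The roughly four minutes of probes above are plain HTTPS GETs against the apiserver's healthz endpoint with a 5-second client timeout; every attempt timed out, so minikube now falls through to a full cluster reset. A minimal shell equivalent of one probe (the endpoint is taken from the log; the curl flags are assumptions, since the harness actually uses a Go HTTP client):

# probe the apiserver health endpoint, giving up after 5s like the log's client
curl -sk --max-time 5 https://10.0.2.15:8443/healthz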
	I0805 04:39:51.540910    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0805 04:39:52.585663    9720 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.044730167s)
	I0805 04:39:52.585723    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 04:39:52.590916    9720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 04:39:52.593830    9720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 04:39:52.596922    9720 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 04:39:52.596929    9720 kubeadm.go:157] found existing configuration files:
	
	I0805 04:39:52.596952    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/admin.conf
	I0805 04:39:52.599734    9720 kubeadm.go:163] "https://control-plane.minikube.internal:51233" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 04:39:52.599755    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 04:39:52.602339    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/kubelet.conf
	I0805 04:39:52.605343    9720 kubeadm.go:163] "https://control-plane.minikube.internal:51233" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 04:39:52.605364    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 04:39:52.608489    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/controller-manager.conf
	I0805 04:39:52.611139    9720 kubeadm.go:163] "https://control-plane.minikube.internal:51233" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 04:39:52.611161    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 04:39:52.613785    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/scheduler.conf
	I0805 04:39:52.616702    9720 kubeadm.go:163] "https://control-plane.minikube.internal:51233" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 04:39:52.616722    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
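The four grep/rm pairs above are one sweep: a kubeconfig is kept only if it already points at the expected control-plane endpoint, and is otherwise deleted so the upcoming `kubeadm init` can rewrite it. Condensed into a loop (paths and URL exactly as in the log; `-q` is a condensation of checking grep's exit status):

for f in admin kubelet controller-manager scheduler; do
  sudo grep -q https://control-plane.minikube.internal:51233 /etc/kubernetes/$f.conf \
    || sudo rm -f /etc/kubernetes/$f.conf
done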
	I0805 04:39:52.619526    9720 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 04:39:52.636661    9720 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0805 04:39:52.636706    9720 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 04:39:52.684886    9720 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 04:39:52.684947    9720 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 04:39:52.684997    9720 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 04:39:52.733461    9720 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 04:39:52.741459    9720 out.go:204]   - Generating certificates and keys ...
	I0805 04:39:52.741493    9720 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 04:39:52.741531    9720 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 04:39:52.741576    9720 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 04:39:52.741610    9720 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 04:39:52.741651    9720 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 04:39:52.741682    9720 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 04:39:52.741722    9720 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 04:39:52.741759    9720 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 04:39:52.741799    9720 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 04:39:52.741842    9720 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 04:39:52.741871    9720 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 04:39:52.741909    9720 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 04:39:52.930897    9720 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 04:39:53.047113    9720 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 04:39:53.171273    9720 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 04:39:53.218644    9720 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 04:39:53.252428    9720 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 04:39:53.252741    9720 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 04:39:53.252772    9720 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 04:39:53.341628    9720 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 04:39:53.345800    9720 out.go:204]   - Booting up control plane ...
	I0805 04:39:53.345862    9720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 04:39:53.345941    9720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 04:39:53.346096    9720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 04:39:53.346170    9720 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 04:39:53.347543    9720 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 04:39:57.349947    9720 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002304 seconds
	I0805 04:39:57.350052    9720 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 04:39:57.355368    9720 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 04:39:57.873958    9720 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 04:39:57.874356    9720 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-763000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 04:39:58.378547    9720 kubeadm.go:310] [bootstrap-token] Using token: 0ez5dh.g9773038io9n2e5d
	I0805 04:39:58.381472    9720 out.go:204]   - Configuring RBAC rules ...
	I0805 04:39:58.381537    9720 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 04:39:58.381592    9720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 04:39:58.388562    9720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 04:39:58.389584    9720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 04:39:58.390721    9720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 04:39:58.391743    9720 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 04:39:58.396837    9720 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 04:39:58.579495    9720 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 04:39:58.782105    9720 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 04:39:58.782538    9720 kubeadm.go:310] 
	I0805 04:39:58.782570    9720 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 04:39:58.782575    9720 kubeadm.go:310] 
	I0805 04:39:58.782612    9720 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 04:39:58.782621    9720 kubeadm.go:310] 
	I0805 04:39:58.782633    9720 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 04:39:58.782711    9720 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 04:39:58.782773    9720 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 04:39:58.782795    9720 kubeadm.go:310] 
	I0805 04:39:58.782841    9720 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 04:39:58.782863    9720 kubeadm.go:310] 
	I0805 04:39:58.782892    9720 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 04:39:58.782894    9720 kubeadm.go:310] 
	I0805 04:39:58.782928    9720 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 04:39:58.782981    9720 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 04:39:58.783080    9720 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 04:39:58.783086    9720 kubeadm.go:310] 
	I0805 04:39:58.783134    9720 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 04:39:58.783180    9720 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 04:39:58.783183    9720 kubeadm.go:310] 
	I0805 04:39:58.783232    9720 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0ez5dh.g9773038io9n2e5d \
	I0805 04:39:58.783301    9720 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:00ad0c80a9f7b4b654bf16d7fdaf8cb3872452317480a453e3b9036c421b1809 \
	I0805 04:39:58.783316    9720 kubeadm.go:310] 	--control-plane 
	I0805 04:39:58.783319    9720 kubeadm.go:310] 
	I0805 04:39:58.783363    9720 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 04:39:58.783367    9720 kubeadm.go:310] 
	I0805 04:39:58.783406    9720 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0ez5dh.g9773038io9n2e5d \
	I0805 04:39:58.783460    9720 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:00ad0c80a9f7b4b654bf16d7fdaf8cb3872452317480a453e3b9036c421b1809 
	I0805 04:39:58.783516    9720 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
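The preflight warning above is actionable as written; enabling the unit makes the kubelet start on boot (command quoted from the warning itself):

sudo systemctl enable kubelet.service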
	I0805 04:39:58.783524    9720 cni.go:84] Creating CNI manager for ""
	I0805 04:39:58.783531    9720 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:39:58.789252    9720 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 04:39:58.796300    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 04:39:58.799313    9720 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
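The scp above writes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist, but its contents are not shown in the log. For reference only, a representative bridge CNI conflist looks like the following; all field values here, including the subnet, are assumptions rather than the file minikube actually wrote:

# hypothetical illustration only -- the real 496-byte file's contents are not in the log
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF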
	I0805 04:39:58.806718    9720 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 04:39:58.806817    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 04:39:58.806818    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-763000 minikube.k8s.io/updated_at=2024_08_05T04_39_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=running-upgrade-763000 minikube.k8s.io/primary=true
	I0805 04:39:58.869699    9720 kubeadm.go:1113] duration metric: took 62.963542ms to wait for elevateKubeSystemPrivileges
	I0805 04:39:58.869754    9720 ops.go:34] apiserver oom_adj: -16
	I0805 04:39:58.869759    9720 kubeadm.go:394] duration metric: took 4m11.808806833s to StartCluster
	I0805 04:39:58.869769    9720 settings.go:142] acquiring lock: {Name:mk4ccaf175b574f554efa4f63e0208c978f3f34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:39:58.869937    9720 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:39:58.870305    9720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/kubeconfig: {Name:mk9388f295704cbd2679ba0e5c0bb91678f79ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:39:58.870539    9720 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:39:58.870609    9720 config.go:182] Loaded profile config "running-upgrade-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:39:58.870636    9720 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 04:39:58.870671    9720 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-763000"
	I0805 04:39:58.870678    9720 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-763000"
	I0805 04:39:58.870683    9720 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-763000"
	W0805 04:39:58.870686    9720 addons.go:243] addon storage-provisioner should already be in state true
	I0805 04:39:58.870689    9720 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-763000"
	I0805 04:39:58.870698    9720 host.go:66] Checking if "running-upgrade-763000" exists ...
	I0805 04:39:58.871633    9720 kapi.go:59] client config for running-upgrade-763000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/client.key", CAFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105fe41b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 04:39:58.871756    9720 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-763000"
	W0805 04:39:58.871761    9720 addons.go:243] addon default-storageclass should already be in state true
	I0805 04:39:58.871767    9720 host.go:66] Checking if "running-upgrade-763000" exists ...
	I0805 04:39:58.875140    9720 out.go:177] * Verifying Kubernetes components...
	I0805 04:39:58.875612    9720 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 04:39:58.878450    9720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 04:39:58.878456    9720 sshutil.go:53] new ssh client: &{IP:localhost Port:51201 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I0805 04:39:58.881159    9720 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:39:58.884253    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:58.890218    9720 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 04:39:58.890226    9720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 04:39:58.890235    9720 sshutil.go:53] new ssh client: &{IP:localhost Port:51201 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I0805 04:39:58.978063    9720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 04:39:58.983344    9720 api_server.go:52] waiting for apiserver process to appear ...
	I0805 04:39:58.983391    9720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:39:58.987305    9720 api_server.go:72] duration metric: took 116.754625ms to wait for apiserver process to appear ...
	I0805 04:39:58.987313    9720 api_server.go:88] waiting for apiserver healthz status ...
	I0805 04:39:58.987319    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:58.993102    9720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 04:39:59.009124    9720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 04:40:03.989463    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:03.989508    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:08.989858    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:08.989886    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:13.990275    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:13.990307    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:18.991279    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:18.991312    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:23.992020    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:23.992069    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:28.993037    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:28.993084    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0805 04:40:29.331459    9720 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0805 04:40:29.337032    9720 out.go:177] * Enabled addons: storage-provisioner
	I0805 04:40:29.343921    9720 addons.go:510] duration metric: took 30.473047166s for enable addons: enabled=[storage-provisioner]
	I0805 04:40:33.994359    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:33.994421    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:38.996389    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:38.996444    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:43.998564    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:43.998608    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:49.000995    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:49.001041    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:54.003425    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:54.003460    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:59.005777    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:59.005863    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:40:59.016785    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:40:59.016863    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:40:59.028054    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:40:59.028132    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:40:59.045783    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:40:59.045868    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:40:59.066924    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:40:59.066990    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:40:59.077490    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:40:59.077557    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:40:59.088008    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:40:59.088069    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:40:59.097977    9720 logs.go:276] 0 containers: []
	W0805 04:40:59.097989    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:40:59.098041    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:40:59.109086    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:40:59.109108    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:40:59.109114    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:40:59.123100    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:40:59.123110    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:40:59.134579    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:40:59.134589    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:40:59.146164    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:40:59.146174    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:40:59.157102    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:40:59.157113    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:40:59.181480    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:40:59.181488    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:40:59.185948    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:40:59.185954    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:40:59.224060    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:40:59.224072    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:40:59.235544    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:40:59.235554    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:40:59.250949    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:40:59.250960    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:40:59.268720    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:40:59.268731    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:40:59.281494    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:40:59.281508    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:40:59.320333    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:40:59.320343    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:01.836241    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:06.839124    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:06.839239    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:06.852323    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:06.852394    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:06.863464    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:06.863529    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:06.874521    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:06.874588    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:06.885446    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:06.885515    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:06.896063    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:06.896130    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:06.906911    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:06.906975    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:06.918188    9720 logs.go:276] 0 containers: []
	W0805 04:41:06.918199    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:06.918254    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:06.928713    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:06.928730    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:06.928735    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:41:06.941129    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:06.941139    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:06.956232    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:06.956242    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:06.968216    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:06.968231    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:06.987333    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:06.987344    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:07.012338    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:07.012346    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:07.027161    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:07.027172    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:07.064782    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:07.064789    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:07.076467    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:07.076481    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:07.091230    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:07.091240    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:07.104813    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:07.104823    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:07.120049    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:07.120058    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:07.124230    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:07.124238    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:09.664797    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:14.667171    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:14.667365    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:14.683659    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:14.683740    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:14.696146    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:14.696218    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:14.707831    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:14.707898    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:14.718031    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:14.718094    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:14.728118    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:14.728190    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:14.738151    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:14.738214    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:14.748430    9720 logs.go:276] 0 containers: []
	W0805 04:41:14.748441    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:14.748489    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:14.758718    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:14.758733    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:14.758739    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:14.771276    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:14.771288    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:14.783892    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:14.783903    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:14.807196    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:14.807203    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:14.811414    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:14.811421    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:14.851291    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:14.851302    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:14.866296    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:14.866312    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:14.881664    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:14.881680    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:14.904384    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:14.904398    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:14.917221    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:14.917239    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:14.956202    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:14.956215    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:14.970576    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:14.970586    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:14.982867    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:14.982878    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
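The block above is one full cycle of the pattern that repeats for the rest of this excerpt: minikube probes the apiserver at https://10.0.2.15:8443/healthz, the GET gives up after roughly five seconds with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", and each failure triggers the same round of container enumeration and log collection before the next probe. The sketch below reproduces just the probe step; it is illustrative only, not minikube's source (api_server.go), and the InsecureSkipVerify transport is an assumption made because a local test cluster's apiserver normally serves a self-signed certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver health endpoint with a
// 5-second client timeout, matching the ~5s gap between each "Checking
// apiserver healthz" line and its "stopped: ..." line in the log above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip certificate verification for a local cluster's
		// self-signed apiserver certificate.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// On a hung apiserver this is the net/http error quoted in the log:
		// "Client.Timeout exceeded while awaiting headers".
		return err
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}

Against a healthy cluster this would print a 200 status; against the VM in this run it fails exactly as the "stopped:" lines show, every cycle, for the whole excerpt.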
	I0805 04:41:17.501896    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:22.504218    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:22.504400    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:22.529430    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:22.529543    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:22.545780    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:22.545849    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:22.558623    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:22.558696    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:22.569625    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:22.569689    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:22.580020    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:22.580086    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:22.590444    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:22.590510    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:22.601944    9720 logs.go:276] 0 containers: []
	W0805 04:41:22.601960    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:22.602017    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:22.612819    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:22.612834    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:22.612839    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:22.649519    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:22.649532    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:22.665260    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:22.665273    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:22.678268    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:22.678277    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:22.699357    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:22.699371    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:22.715227    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:22.715239    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:22.727743    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:22.727755    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:22.750830    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:22.750837    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:22.786779    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:22.786787    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:22.791140    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:22.791146    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:22.808184    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:22.808198    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:22.819487    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:22.819496    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:41:22.830907    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:22.830922    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:25.344216    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:30.346657    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:30.346815    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:30.363416    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:30.363505    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:30.379632    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:30.379702    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:30.390613    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:30.390685    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:30.401093    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:30.401158    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:30.411618    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:30.411684    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:30.424818    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:30.424891    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:30.436199    9720 logs.go:276] 0 containers: []
	W0805 04:41:30.436210    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:30.436267    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:30.446435    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:30.446449    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:30.446455    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:30.461041    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:30.461050    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:30.476812    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:30.476822    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:30.488185    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:30.488194    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:30.493373    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:30.493380    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:30.528952    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:30.528962    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:30.547534    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:30.547544    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:30.559723    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:30.559737    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:41:30.571739    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:30.571753    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:30.583342    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:30.583352    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:30.601367    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:30.601380    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:30.626318    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:30.626326    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:30.664478    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:30.664489    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:33.178889    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:38.181319    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:38.181549    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:38.208567    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:38.208679    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:38.226088    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:38.226166    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:38.240298    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:38.240373    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:38.251685    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:38.251747    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:38.261682    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:38.261746    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:38.272262    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:38.272322    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:38.282709    9720 logs.go:276] 0 containers: []
	W0805 04:41:38.282722    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:38.282776    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:38.293406    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:38.293425    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:38.293430    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:38.305514    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:38.305524    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:38.330469    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:38.330478    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:38.368803    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:38.368811    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:38.373469    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:38.373476    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:38.386950    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:38.386960    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:38.398605    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:38.398615    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:41:38.409919    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:38.409928    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:38.424948    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:38.424958    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:38.461144    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:38.461156    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:38.476094    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:38.476103    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:38.488030    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:38.488040    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:38.505711    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:38.505725    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:41.022137    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:46.024593    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:46.024839    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:46.049689    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:46.049797    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:46.067433    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:46.067504    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:46.080456    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:46.080521    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:46.092143    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:46.092212    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:46.102675    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:46.102744    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:46.113373    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:46.113439    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:46.127696    9720 logs.go:276] 0 containers: []
	W0805 04:41:46.127706    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:46.127760    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:46.138077    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:46.138092    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:46.138097    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:41:46.151000    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:46.151013    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:46.163334    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:46.163344    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:46.180764    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:46.180774    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:46.204686    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:46.204698    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:46.241747    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:46.241756    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:46.255272    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:46.255283    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:46.267132    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:46.267145    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:46.282272    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:46.282284    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:46.293906    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:46.293917    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:46.307758    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:46.307769    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:46.312300    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:46.312308    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:46.353386    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:46.353401    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:48.870715    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:53.873004    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:53.873252    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:53.898125    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:53.898239    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:53.915563    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:53.915648    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:53.928384    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:53.928454    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:53.939618    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:53.939684    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:53.954975    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:53.955046    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:53.965233    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:53.965298    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:53.975163    9720 logs.go:276] 0 containers: []
	W0805 04:41:53.975176    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:53.975230    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:53.985168    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:53.985185    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:53.985192    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:54.027840    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:54.027852    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:54.042172    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:54.042183    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:54.054007    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:54.054017    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:54.069416    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:54.069425    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:54.108065    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:54.108074    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:54.112573    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:54.112581    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:54.124411    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:54.124422    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:54.145328    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:54.145338    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:54.156980    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:54.156991    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:54.182712    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:54.182725    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:54.195913    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:54.195924    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:54.211077    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:54.211088    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:41:56.724832    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:01.727197    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:01.727412    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:01.749717    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:01.749821    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:01.765865    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:01.765944    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:01.778638    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:01.778707    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:01.789659    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:01.789720    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:01.803183    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:01.803249    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:01.813741    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:01.813811    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:01.824275    9720 logs.go:276] 0 containers: []
	W0805 04:42:01.824285    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:01.824339    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:01.834715    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:01.834734    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:01.834739    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:01.849420    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:01.849431    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:01.864964    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:01.864975    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:01.889601    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:01.889612    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:01.902133    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:01.902147    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:01.938925    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:01.938933    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:01.950816    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:01.950848    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:01.964550    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:01.964560    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:01.976241    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:01.976253    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:01.990744    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:01.990755    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:02.002141    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:02.002156    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:02.014641    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:02.014651    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:02.026206    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:02.026218    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:02.049186    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:02.049196    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:02.053990    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:02.053999    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
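One detail changes mid-excerpt: starting at 04:42:01 the coredns query returns four container IDs ([1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]) where earlier cycles saw two, so each gather phase now includes two extra docker logs calls. The enumeration itself is a plain docker ps -a name filter per component; below is a minimal stand-alone sketch of that step, illustrative only and run directly rather than through minikube's ssh_runner over SSH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// k8s_<component>, the same filter the log's enumeration step uses.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The eight components queried on every cycle in the log.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}

A component with no match yields an empty list, which is why every cycle logs the warning for "kindnet": that CNI is simply not deployed on this single-node Docker-runtime cluster.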
	I0805 04:42:04.590489    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:09.591239    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:09.591456    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:09.615876    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:09.615979    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:09.631989    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:09.632075    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:09.645538    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:09.645625    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:09.662877    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:09.662946    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:09.673364    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:09.673431    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:09.683846    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:09.683907    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:09.694231    9720 logs.go:276] 0 containers: []
	W0805 04:42:09.694240    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:09.694289    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:09.704752    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:09.704773    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:09.704778    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:09.709732    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:09.709741    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:09.723811    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:09.723820    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:09.735759    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:09.735769    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:09.748184    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:09.748195    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:09.759573    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:09.759583    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:09.783669    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:09.783676    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:09.819342    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:09.819349    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:09.833971    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:09.833981    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:09.845761    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:09.845771    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:09.857406    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:09.857417    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:09.893961    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:09.893971    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:09.905669    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:09.905682    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:09.916964    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:09.916975    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:09.932299    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:09.932312    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:12.451845    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:17.454119    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:17.454195    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:17.466009    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:17.466074    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:17.480739    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:17.480806    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:17.491419    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:17.491495    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:17.501723    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:17.501792    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:17.513759    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:17.513824    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:17.524766    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:17.524827    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:17.535877    9720 logs.go:276] 0 containers: []
	W0805 04:42:17.535891    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:17.535943    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:17.551012    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:17.551029    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:17.551034    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:17.555561    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:17.555568    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:17.571524    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:17.571541    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:17.586053    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:17.586066    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:17.601629    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:17.601640    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:17.616907    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:17.616922    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:17.628735    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:17.628748    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:17.647031    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:17.647041    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:17.672335    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:17.672345    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:17.687076    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:17.687087    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:17.698704    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:17.698718    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:17.710396    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:17.710409    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:17.722125    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:17.722135    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:17.759757    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:17.759767    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:17.831740    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:17.831751    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:20.345739    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:25.348384    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:25.348535    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:25.362854    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:25.362938    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:25.377276    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:25.377345    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:25.390657    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:25.390724    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:25.400991    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:25.401061    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:25.411265    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:25.411329    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:25.421791    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:25.421860    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:25.431643    9720 logs.go:276] 0 containers: []
	W0805 04:42:25.431655    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:25.431709    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:25.442670    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:25.442686    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:25.442692    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:25.454308    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:25.454319    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:25.466640    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:25.466652    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:25.486305    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:25.486315    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:25.511271    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:25.511281    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:25.523251    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:25.523263    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:25.561245    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:25.561256    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:25.575330    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:25.575342    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:25.588618    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:25.588633    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:25.600031    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:25.600045    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:25.605120    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:25.605127    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:25.619028    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:25.619037    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:25.646405    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:25.646415    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:25.684470    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:25.684484    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:25.696708    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:25.696722    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
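Each gather step in these cycles is a single command run through /bin/bash -c inside the guest: docker logs --tail 400 per container, journalctl -n 400 for the kubelet and Docker units, a severity-filtered dmesg, kubectl describe nodes with the in-VM kubeconfig, and a crictl-or-docker fallback for container status. A minimal sketch of that collection loop follows, under the assumption of running the commands locally for illustration (minikube runs them over SSH inside the VM); the commands themselves are copied verbatim from the cycle above.

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through /bin/bash -c and prints its
// combined output, mirroring one "Gathering logs for ..." step in the log.
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("== %s ==\n%s", name, out)
	if err != nil {
		fmt.Printf("(%s failed: %v)\n", name, err)
	}
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}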
	I0805 04:42:28.216464    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:33.218971    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:33.219158    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:33.241799    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:33.241925    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:33.257178    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:33.257272    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:33.270172    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:33.270243    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:33.282501    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:33.282562    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:33.293437    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:33.293500    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:33.307825    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:33.307891    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:33.318314    9720 logs.go:276] 0 containers: []
	W0805 04:42:33.318324    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:33.318376    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:33.328799    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:33.328814    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:33.328818    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:33.333346    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:33.333351    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:33.345081    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:33.345093    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:33.357064    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:33.357073    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:33.382286    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:33.382296    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:33.398200    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:33.398213    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:33.410301    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:33.410310    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:33.421727    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:33.421740    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:33.432928    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:33.432937    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:33.471554    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:33.471564    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:33.509183    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:33.509196    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:33.526860    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:33.526870    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:33.550750    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:33.550758    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:33.562115    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:33.562128    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:33.577620    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:33.577630    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:36.097438    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:41.100010    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:41.100306    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:41.134509    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:41.134641    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:41.153918    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:41.154010    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:41.168791    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:41.168875    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:41.180909    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:41.180977    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:41.191721    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:41.191788    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:41.205721    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:41.205792    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:41.217368    9720 logs.go:276] 0 containers: []
	W0805 04:42:41.217381    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:41.217439    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:41.227998    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:41.228019    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:41.228025    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:41.243509    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:41.243519    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:41.266172    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:41.266186    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:41.281759    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:41.281772    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:41.309176    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:41.309190    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:41.323568    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:41.323577    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:41.358156    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:41.358166    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:41.372342    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:41.372353    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:41.384392    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:41.384404    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:41.422238    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:41.422253    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:41.426561    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:41.426568    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:41.440655    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:41.440664    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:41.452864    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:41.452876    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:41.465371    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:41.465386    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:41.479144    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:41.479155    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:44.008793    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:49.010750    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:49.010829    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:49.021383    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:49.021450    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:49.032219    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:49.032279    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:49.043186    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:49.043255    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:49.053733    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:49.053800    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:49.064172    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:49.064236    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:49.074723    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:49.074783    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:49.084573    9720 logs.go:276] 0 containers: []
	W0805 04:42:49.084586    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:49.084637    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:49.095415    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:49.095437    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:49.095445    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:49.108048    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:49.108058    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:49.125874    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:49.125884    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:49.140201    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:49.140214    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:49.155403    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:49.155413    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:49.169477    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:49.169486    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:49.181380    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:49.181390    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:49.192929    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:49.192941    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:49.204632    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:49.204643    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:49.239680    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:49.239690    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:49.244164    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:49.244174    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:49.260428    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:49.260442    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:49.271938    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:49.271948    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:49.283736    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:49.283748    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:49.307038    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:49.307046    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:51.845350    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:56.848105    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:56.848278    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:56.880326    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:56.880439    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:56.898993    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:56.899076    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:56.912069    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:56.912144    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:56.923683    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:56.923750    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:56.934386    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:56.934450    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:56.944673    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:56.944735    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:56.955437    9720 logs.go:276] 0 containers: []
	W0805 04:42:56.955448    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:56.955505    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:56.966859    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:56.966878    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:56.966884    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:57.005970    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:57.005978    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:57.041185    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:57.041195    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:57.064181    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:57.064187    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:57.079033    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:57.079047    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:57.090735    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:57.090744    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:57.108710    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:57.108720    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:57.123206    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:57.123216    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:57.134781    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:57.134792    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:57.139890    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:57.139897    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:57.157046    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:57.157060    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:57.171278    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:57.171288    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:57.182852    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:57.182863    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:57.213613    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:57.213624    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:57.233920    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:57.233930    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:59.747239    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:04.749614    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:04.749750    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:04.760333    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:04.760409    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:04.771825    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:04.771895    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:04.783620    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:43:04.783688    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:04.799013    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:04.799088    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:04.809945    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:04.810010    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:04.820516    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:04.820576    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:04.830280    9720 logs.go:276] 0 containers: []
	W0805 04:43:04.830292    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:04.830349    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:04.840687    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:04.840705    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:04.840710    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:04.854517    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:04.854528    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:04.872715    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:04.872725    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:04.884049    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:04.884059    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:04.895575    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:04.895585    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:04.934474    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:04.934486    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:04.939429    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:04.939438    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:04.954182    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:04.954192    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:04.978317    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:04.978328    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:05.014084    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:43:05.014097    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:43:05.025812    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:05.025823    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:05.040685    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:05.040696    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:05.055427    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:05.055437    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:05.066682    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:05.066690    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:05.077804    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:43:05.077813    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:43:07.591657    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:12.594007    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:12.594244    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:12.617202    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:12.617295    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:12.633907    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:12.633979    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:12.646494    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:43:12.646567    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:12.657941    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:12.658009    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:12.669389    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:12.669458    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:12.680467    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:12.680524    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:12.690667    9720 logs.go:276] 0 containers: []
	W0805 04:43:12.690682    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:12.690728    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:12.701072    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:12.701089    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:12.701095    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:12.736523    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:12.736534    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:12.751259    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:12.751271    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:12.756293    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:12.756303    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:12.769191    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:12.769202    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:12.781215    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:43:12.781226    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:43:12.794651    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:12.794662    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:12.810375    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:12.810386    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:12.822098    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:12.822108    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:12.833655    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:12.833666    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:12.854113    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:12.854123    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:12.868568    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:43:12.868577    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:43:12.880905    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:12.880916    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:12.898779    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:12.898789    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:12.922451    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:12.922459    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:15.461875    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:20.464277    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:20.464570    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:20.498752    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:20.498884    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:20.519430    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:20.519508    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:20.533564    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:43:20.533643    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:20.545325    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:20.545394    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:20.556360    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:20.556423    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:20.567436    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:20.567493    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:20.577662    9720 logs.go:276] 0 containers: []
	W0805 04:43:20.577678    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:20.577742    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:20.588409    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:20.588449    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:20.588454    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:20.600252    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:20.600266    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:20.614687    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:20.614695    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:20.626622    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:43:20.626632    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:43:20.638852    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:43:20.638865    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:43:20.651571    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:20.651582    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:20.664234    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:20.664244    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:20.688537    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:20.688549    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:20.713670    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:20.713680    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:20.725882    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:20.725896    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:20.730955    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:20.730964    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:20.767756    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:20.767767    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:20.783334    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:20.783350    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:20.820854    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:20.820871    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:20.835956    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:20.835967    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:23.350298    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:28.352780    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:28.353066    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:28.381009    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:28.381135    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:28.398194    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:28.398276    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:28.411844    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:43:28.411917    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:28.423518    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:28.423612    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:28.434377    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:28.434446    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:28.445339    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:28.445406    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:28.455369    9720 logs.go:276] 0 containers: []
	W0805 04:43:28.455380    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:28.455439    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:28.472810    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:28.472827    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:43:28.472832    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:43:28.485246    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:43:28.485256    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:43:28.496901    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:28.496916    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:28.517712    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:28.517724    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:28.533311    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:28.533322    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:28.545672    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:28.545688    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:28.557211    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:28.557226    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:28.593347    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:28.593356    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:28.616057    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:28.616064    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:28.629355    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:28.629364    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:28.644379    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:28.644389    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:28.658297    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:28.658308    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:28.671867    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:28.671878    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:28.676431    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:28.676439    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:28.716532    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:28.716543    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:31.233013    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:36.235445    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:36.235542    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:36.247899    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:36.247986    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:36.259546    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:36.259623    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:36.271536    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:43:36.271608    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:36.282512    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:36.282631    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:36.294451    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:36.294522    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:36.306713    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:36.306779    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:36.318970    9720 logs.go:276] 0 containers: []
	W0805 04:43:36.318982    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:36.319046    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:36.331062    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:36.331080    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:36.331086    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:36.347551    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:36.347564    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:36.363428    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:36.363444    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:36.377835    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:36.377847    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:36.391702    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:36.391714    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:36.416880    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:36.416898    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:36.458124    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:36.458136    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:36.470883    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:36.470894    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:36.489906    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:36.489918    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:36.502490    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:36.502502    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:36.520277    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:36.520293    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:36.561840    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:36.561860    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:36.567018    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:36.567026    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:36.580894    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:43:36.580904    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:43:36.593787    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:43:36.593798    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:43:39.109488    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:44.111953    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:44.112093    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:44.128160    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:44.128230    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:44.140469    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:44.140536    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:44.151409    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:43:44.151483    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:44.161971    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:44.162034    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:44.180436    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:44.180503    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:44.190929    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:44.190999    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:44.201028    9720 logs.go:276] 0 containers: []
	W0805 04:43:44.201040    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:44.201097    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:44.211501    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:44.211521    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:44.211527    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:44.229522    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:44.229533    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:44.241651    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:44.241661    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:44.261815    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:44.261825    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:44.275848    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:44.275859    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:44.290492    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:44.290502    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:44.302159    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:44.302170    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:44.313264    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:44.313274    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:44.328284    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:44.328294    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:44.351124    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:44.351133    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:44.355553    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:44.355559    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:44.391214    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:43:44.391225    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:43:44.402836    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:43:44.402846    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:43:44.414502    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:44.414514    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:44.426531    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:44.426542    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:46.965581    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:51.968007    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:51.968136    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:51.984276    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:51.984348    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:51.995799    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:51.995873    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:52.006754    9720 logs.go:276] 4 containers: [396dbef8c681 4b67b31cb033 1b9570e90766 ef130aa43104]
	I0805 04:43:52.006826    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:52.017769    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:52.017833    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:52.028209    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:52.028275    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:52.042680    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:52.042745    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:52.054670    9720 logs.go:276] 0 containers: []
	W0805 04:43:52.054683    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:52.054742    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:52.065267    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:52.065285    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:52.065291    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:52.079955    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:52.079965    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:52.092274    9720 logs.go:123] Gathering logs for coredns [396dbef8c681] ...
	I0805 04:43:52.092285    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dbef8c681"
	I0805 04:43:52.105406    9720 logs.go:123] Gathering logs for coredns [4b67b31cb033] ...
	I0805 04:43:52.105418    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b67b31cb033"
	I0805 04:43:52.116772    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:52.116787    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:52.132999    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:52.133011    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:52.145750    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:52.145760    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:52.163139    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:52.163151    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:52.175385    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:52.175396    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:52.214479    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:52.214491    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:52.219191    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:52.219197    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:52.242253    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:52.242261    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:52.255058    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:52.255069    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:52.291546    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:52.291557    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:52.305789    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:52.305799    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:54.820492    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:59.822796    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:59.826283    9720 out.go:177] 
	W0805 04:43:59.831157    9720 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0805 04:43:59.831167    9720 out.go:239] * 
	W0805 04:43:59.831801    9720 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:43:59.842099    9720 out.go:177] 

** /stderr **
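The stderr log above is a single retry loop: roughly every 2.5 seconds minikube probes https://10.0.2.15:8443/healthz with a 5-second client timeout, re-enumerates the control-plane containers, and tails their logs, until the overall 6m0s node-start budget expires. A minimal sketch of probing the same endpoint by hand, assuming the guest from this run is still up (profile name and IP copied from the output above):

    # Run the probe from inside the guest; a healthy apiserver answers "ok".
    out/minikube-darwin-arm64 -p running-upgrade-763000 ssh -- \
      curl -sk --max-time 5 https://10.0.2.15:8443/healthz
    # In this run the request would time out, matching the repeated
    # "context deadline exceeded (Client.Timeout exceeded ...)" lines.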
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-763000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-05 04:43:59.93623 -0700 PDT m=+1312.506167751
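To reproduce the failing step outside the test harness, the same start command can be re-run by hand; the flags below are copied verbatim from the failure line above:

    out/minikube-darwin-arm64 start -p running-upgrade-763000 \
      --memory=2200 --alsologtostderr -v=1 --driver=qemu2
    # exit status 80 is the GUEST_START failure reported in the stderr log.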
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-763000 -n running-upgrade-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-763000 -n running-upgrade-763000: exit status 2 (15.706215833s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
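The "(may be ok)" note reflects that --format={{.Host}} prints only the VM state, which is Running, while the exit status also encodes the health of the other components. A sketch querying more of the status fields (field names assume minikube's standard status template; profile copied from above):

    out/minikube-darwin-arm64 status -p running-upgrade-763000 \
      --format='host: {{.Host}}, kubelet: {{.Kubelet}}, apiserver: {{.APIServer}}'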
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-763000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-992000          | force-systemd-flag-992000 | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-058000              | force-systemd-env-058000  | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-058000           | force-systemd-env-058000  | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT | 05 Aug 24 04:34 PDT |
	| start   | -p docker-flags-390000                | docker-flags-390000       | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-992000             | force-systemd-flag-992000 | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-992000          | force-systemd-flag-992000 | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT | 05 Aug 24 04:34 PDT |
	| start   | -p cert-expiration-871000             | cert-expiration-871000    | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-390000 ssh               | docker-flags-390000       | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-390000 ssh               | docker-flags-390000       | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-390000                | docker-flags-390000       | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT | 05 Aug 24 04:34 PDT |
	| start   | -p cert-options-155000                | cert-options-155000       | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-155000 ssh               | cert-options-155000       | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-155000 -- sudo        | cert-options-155000       | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-155000                | cert-options-155000       | jenkins | v1.33.1 | 05 Aug 24 04:34 PDT | 05 Aug 24 04:34 PDT |
	| start   | -p running-upgrade-763000             | minikube                  | jenkins | v1.26.0 | 05 Aug 24 04:34 PDT | 05 Aug 24 04:35 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-763000             | running-upgrade-763000    | jenkins | v1.33.1 | 05 Aug 24 04:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-871000             | cert-expiration-871000    | jenkins | v1.33.1 | 05 Aug 24 04:37 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-871000             | cert-expiration-871000    | jenkins | v1.33.1 | 05 Aug 24 04:37 PDT | 05 Aug 24 04:37 PDT |
	| start   | -p kubernetes-upgrade-767000          | kubernetes-upgrade-767000 | jenkins | v1.33.1 | 05 Aug 24 04:37 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-767000          | kubernetes-upgrade-767000 | jenkins | v1.33.1 | 05 Aug 24 04:37 PDT | 05 Aug 24 04:37 PDT |
	| start   | -p kubernetes-upgrade-767000          | kubernetes-upgrade-767000 | jenkins | v1.33.1 | 05 Aug 24 04:37 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-767000          | kubernetes-upgrade-767000 | jenkins | v1.33.1 | 05 Aug 24 04:37 PDT | 05 Aug 24 04:37 PDT |
	| start   | -p stopped-upgrade-528000             | minikube                  | jenkins | v1.26.0 | 05 Aug 24 04:37 PDT | 05 Aug 24 04:38 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-528000 stop           | minikube                  | jenkins | v1.26.0 | 05 Aug 24 04:38 PDT | 05 Aug 24 04:38 PDT |
	| start   | -p stopped-upgrade-528000             | stopped-upgrade-528000    | jenkins | v1.33.1 | 05 Aug 24 04:38 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 04:38:59
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 04:38:59.864504    9870 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:38:59.864666    9870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:38:59.864671    9870 out.go:304] Setting ErrFile to fd 2...
	I0805 04:38:59.864674    9870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:38:59.864856    9870 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:38:59.866172    9870 out.go:298] Setting JSON to false
	I0805 04:38:59.887102    9870 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5909,"bootTime":1722852030,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:38:59.887168    9870 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:38:59.891201    9870 out.go:177] * [stopped-upgrade-528000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:38:59.899123    9870 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:38:59.899170    9870 notify.go:220] Checking for updates...
	I0805 04:38:59.904616    9870 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:38:59.908076    9870 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:38:59.911113    9870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:38:59.914120    9870 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:38:59.917072    9870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:38:59.920356    9870 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:38:59.924047    9870 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 04:38:59.927107    9870 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:38:59.931082    9870 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:38:59.938034    9870 start.go:297] selected driver: qemu2
	I0805 04:38:59.938039    9870 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51465 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 04:38:59.938090    9870 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:38:59.940755    9870 cni.go:84] Creating CNI manager for ""
	I0805 04:38:59.940771    9870 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:38:59.940794    9870 start.go:340] cluster config:
	{Name:stopped-upgrade-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51465 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 04:38:59.940846    9870 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:38:59.947976    9870 out.go:177] * Starting "stopped-upgrade-528000" primary control-plane node in "stopped-upgrade-528000" cluster
	I0805 04:38:59.952131    9870 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 04:38:59.952154    9870 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0805 04:38:59.952167    9870 cache.go:56] Caching tarball of preloaded images
	I0805 04:38:59.952242    9870 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:38:59.952250    9870 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0805 04:38:59.952311    9870 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/config.json ...
	I0805 04:38:59.952794    9870 start.go:360] acquireMachinesLock for stopped-upgrade-528000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:38:59.952829    9870 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "stopped-upgrade-528000"
	I0805 04:38:59.952837    9870 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:38:59.952842    9870 fix.go:54] fixHost starting: 
	I0805 04:38:59.952952    9870 fix.go:112] recreateIfNeeded on stopped-upgrade-528000: state=Stopped err=<nil>
	W0805 04:38:59.952960    9870 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:38:59.960064    9870 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-528000" ...
	I0805 04:38:58.860022    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:38:59.964093    9870 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:38:59.964177    9870 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51431-:22,hostfwd=tcp::51432-:2376,hostname=stopped-upgrade-528000 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/disk.qcow2
	I0805 04:39:00.012319    9870 main.go:141] libmachine: STDOUT: 
	I0805 04:39:00.012344    9870 main.go:141] libmachine: STDERR: 
	I0805 04:39:00.012350    9870 main.go:141] libmachine: Waiting for VM to start (ssh -p 51431 docker@127.0.0.1)...
	I0805 04:39:03.862810    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:03.863241    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:39:03.905554    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:39:03.905671    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:39:03.926119    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:39:03.926205    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:39:03.941419    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:39:03.941493    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:39:03.953974    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:39:03.954053    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:39:03.965126    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:39:03.965198    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:39:03.975367    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:39:03.975440    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:39:03.985225    9720 logs.go:276] 0 containers: []
	W0805 04:39:03.985242    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:39:03.985321    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:39:03.998048    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:39:03.998066    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:39:03.998071    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:39:04.012004    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:39:04.012013    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:39:04.050178    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:39:04.050191    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:39:04.054617    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:39:04.054624    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:39:04.073578    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:39:04.073590    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:39:04.085292    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:39:04.085301    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:39:04.107815    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:39:04.107822    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:39:04.119835    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:39:04.119847    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:39:04.154480    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:39:04.154493    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:39:04.170243    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:39:04.170253    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:39:04.185604    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:39:04.185617    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:39:04.205157    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:39:04.205171    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:39:04.242120    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:39:04.242133    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:39:04.253891    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:39:04.253902    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:39:04.264849    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:39:04.264860    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:39:04.282902    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:39:04.282915    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:39:04.297335    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:39:04.297347    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:39:06.813147    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:11.815539    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:11.815646    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:39:11.827667    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:39:11.827739    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:39:11.838875    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:39:11.838937    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:39:11.850248    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:39:11.850316    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:39:11.862032    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:39:11.862099    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:39:11.876882    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:39:11.876950    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:39:11.888788    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:39:11.888853    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:39:11.910534    9720 logs.go:276] 0 containers: []
	W0805 04:39:11.910546    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:39:11.910611    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:39:11.921880    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:39:11.921901    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:39:11.921907    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:39:11.959569    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:39:11.959581    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:39:11.971439    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:39:11.971451    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:39:11.996130    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:39:11.996148    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:39:12.009635    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:39:12.009648    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:39:12.014200    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:39:12.014212    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:39:12.029457    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:39:12.029469    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:39:12.042397    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:39:12.042408    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:39:12.057698    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:39:12.057708    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:39:12.075063    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:39:12.075074    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:39:12.087025    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:39:12.087038    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:39:12.102167    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:39:12.102181    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:39:12.140139    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:39:12.140159    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:39:12.178651    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:39:12.178670    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:39:12.193792    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:39:12.193802    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:39:12.232154    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:39:12.232169    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:39:12.245130    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:39:12.245142    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:39:14.759345    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:19.761166    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:19.761602    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:39:19.803656    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:39:19.803791    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:39:19.828661    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:39:19.828755    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:39:19.844818    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:39:19.844898    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:39:19.857484    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:39:19.857551    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:39:19.867986    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:39:19.868051    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:39:19.878431    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:39:19.878489    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:39:19.888540    9720 logs.go:276] 0 containers: []
	W0805 04:39:19.888550    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:39:19.888598    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:39:19.901171    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:39:19.901190    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:39:19.901196    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:39:19.916323    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:39:19.916337    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:39:19.927929    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:39:19.927942    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:39:19.939821    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:39:19.939833    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:39:19.944654    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:39:19.944663    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:39:19.982779    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:39:19.982790    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:39:20.017372    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:39:20.017381    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:39:20.030381    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:39:20.030390    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:39:20.045943    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:39:20.045954    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:39:20.065957    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:39:20.065968    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:39:20.079925    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:39:20.079934    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:39:20.103605    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:39:20.103615    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:39:20.141070    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:39:20.141079    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:39:20.152616    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:39:20.152629    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:39:20.163856    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:39:20.163866    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:39:20.182046    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:39:20.182059    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:39:20.195682    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:39:20.195694    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:39:20.395964    9870 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/config.json ...
	I0805 04:39:20.396388    9870 machine.go:94] provisionDockerMachine start ...
	I0805 04:39:20.396461    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:20.396722    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:20.396729    9870 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 04:39:20.476021    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 04:39:20.476051    9870 buildroot.go:166] provisioning hostname "stopped-upgrade-528000"
	I0805 04:39:20.476117    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:20.476295    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:20.476306    9870 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-528000 && echo "stopped-upgrade-528000" | sudo tee /etc/hostname
	I0805 04:39:20.556223    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-528000
	
	I0805 04:39:20.556281    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:20.556411    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:20.556421    9870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-528000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-528000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-528000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 04:39:20.629414    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 04:39:20.629428    9870 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19377-7130/.minikube CaCertPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19377-7130/.minikube}
	I0805 04:39:20.629435    9870 buildroot.go:174] setting up certificates
	I0805 04:39:20.629439    9870 provision.go:84] configureAuth start
	I0805 04:39:20.629448    9870 provision.go:143] copyHostCerts
	I0805 04:39:20.629551    9870 exec_runner.go:144] found /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.pem, removing ...
	I0805 04:39:20.629558    9870 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.pem
	I0805 04:39:20.629674    9870 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.pem (1078 bytes)
	I0805 04:39:20.629886    9870 exec_runner.go:144] found /Users/jenkins/minikube-integration/19377-7130/.minikube/cert.pem, removing ...
	I0805 04:39:20.629889    9870 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19377-7130/.minikube/cert.pem
	I0805 04:39:20.629949    9870 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19377-7130/.minikube/cert.pem (1123 bytes)
	I0805 04:39:20.630083    9870 exec_runner.go:144] found /Users/jenkins/minikube-integration/19377-7130/.minikube/key.pem, removing ...
	I0805 04:39:20.630086    9870 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19377-7130/.minikube/key.pem
	I0805 04:39:20.630137    9870 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19377-7130/.minikube/key.pem (1675 bytes)
	I0805 04:39:20.630245    9870 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-528000 san=[127.0.0.1 localhost minikube stopped-upgrade-528000]
	I0805 04:39:20.897935    9870 provision.go:177] copyRemoteCerts
	I0805 04:39:20.897988    9870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 04:39:20.897997    9870 sshutil.go:53] new ssh client: &{IP:localhost Port:51431 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/id_rsa Username:docker}
	I0805 04:39:20.936660    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0805 04:39:20.944029    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0805 04:39:20.951164    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 04:39:20.957642    9870 provision.go:87] duration metric: took 328.195083ms to configureAuth
	I0805 04:39:20.957652    9870 buildroot.go:189] setting minikube options for container-runtime
	I0805 04:39:20.957770    9870 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:39:20.957807    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:20.957909    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:20.957916    9870 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 04:39:21.027437    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 04:39:21.027446    9870 buildroot.go:70] root file system type: tmpfs
	I0805 04:39:21.027491    9870 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 04:39:21.027525    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:21.027624    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:21.027659    9870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 04:39:21.099803    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 04:39:21.099860    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:21.099980    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:21.099988    9870 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 04:39:21.462230    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 04:39:21.462242    9870 machine.go:97] duration metric: took 1.065835583s to provisionDockerMachine
	I0805 04:39:21.462248    9870 start.go:293] postStartSetup for "stopped-upgrade-528000" (driver="qemu2")
	I0805 04:39:21.462255    9870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 04:39:21.462310    9870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 04:39:21.462319    9870 sshutil.go:53] new ssh client: &{IP:localhost Port:51431 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/id_rsa Username:docker}
	I0805 04:39:21.501138    9870 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 04:39:21.502491    9870 info.go:137] Remote host: Buildroot 2021.02.12
	I0805 04:39:21.502498    9870 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19377-7130/.minikube/addons for local assets ...
	I0805 04:39:21.502608    9870 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19377-7130/.minikube/files for local assets ...
	I0805 04:39:21.502733    9870 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/ssl/certs/76242.pem -> 76242.pem in /etc/ssl/certs
	I0805 04:39:21.502867    9870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 04:39:21.505229    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/ssl/certs/76242.pem --> /etc/ssl/certs/76242.pem (1708 bytes)
	I0805 04:39:21.512327    9870 start.go:296] duration metric: took 50.073ms for postStartSetup
	I0805 04:39:21.512343    9870 fix.go:56] duration metric: took 21.559292792s for fixHost
	I0805 04:39:21.512378    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:21.512485    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:21.512490    9870 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 04:39:21.581792    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722857961.776913588
	
	I0805 04:39:21.581800    9870 fix.go:216] guest clock: 1722857961.776913588
	I0805 04:39:21.581804    9870 fix.go:229] Guest: 2024-08-05 04:39:21.776913588 -0700 PDT Remote: 2024-08-05 04:39:21.512344 -0700 PDT m=+21.679343001 (delta=264.569588ms)
	I0805 04:39:21.581814    9870 fix.go:200] guest clock delta is within tolerance: 264.569588ms
	I0805 04:39:21.581817    9870 start.go:83] releasing machines lock for "stopped-upgrade-528000", held for 21.628774042s
	I0805 04:39:21.581872    9870 ssh_runner.go:195] Run: cat /version.json
	I0805 04:39:21.581880    9870 sshutil.go:53] new ssh client: &{IP:localhost Port:51431 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/id_rsa Username:docker}
	I0805 04:39:21.582579    9870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 04:39:21.582596    9870 sshutil.go:53] new ssh client: &{IP:localhost Port:51431 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/id_rsa Username:docker}
	W0805 04:39:21.618094    9870 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0805 04:39:21.618144    9870 ssh_runner.go:195] Run: systemctl --version
	I0805 04:39:21.658992    9870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 04:39:21.660596    9870 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 04:39:21.660623    9870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0805 04:39:21.663939    9870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0805 04:39:21.668580    9870 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 04:39:21.668588    9870 start.go:495] detecting cgroup driver to use...
	I0805 04:39:21.668667    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 04:39:21.675752    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0805 04:39:21.679420    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 04:39:21.682305    9870 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 04:39:21.682328    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 04:39:21.685114    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 04:39:21.688232    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 04:39:21.691779    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 04:39:21.695071    9870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 04:39:21.698174    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 04:39:21.700982    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 04:39:21.704223    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 04:39:21.707715    9870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 04:39:21.710545    9870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 04:39:21.713048    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:21.791373    9870 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 04:39:21.798737    9870 start.go:495] detecting cgroup driver to use...
	I0805 04:39:21.798804    9870 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 04:39:21.806200    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 04:39:21.810872    9870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 04:39:21.822723    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 04:39:21.827176    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 04:39:21.831557    9870 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 04:39:21.888182    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 04:39:21.893778    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 04:39:21.899763    9870 ssh_runner.go:195] Run: which cri-dockerd
	I0805 04:39:21.901026    9870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 04:39:21.904092    9870 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 04:39:21.909208    9870 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 04:39:21.994031    9870 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 04:39:22.081352    9870 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 04:39:22.081421    9870 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 04:39:22.086637    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:22.173899    9870 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 04:39:23.338689    9870 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.164760208s)
	I0805 04:39:23.338763    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 04:39:23.343464    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 04:39:23.348692    9870 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 04:39:23.428703    9870 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 04:39:23.511973    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:23.581347    9870 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 04:39:23.587694    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 04:39:23.592214    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:23.676331    9870 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 04:39:23.715138    9870 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 04:39:23.715219    9870 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 04:39:23.718638    9870 start.go:563] Will wait 60s for crictl version
	I0805 04:39:23.718696    9870 ssh_runner.go:195] Run: which crictl
	I0805 04:39:23.720095    9870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 04:39:23.734704    9870 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0805 04:39:23.734764    9870 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 04:39:23.750548    9870 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 04:39:23.770134    9870 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0805 04:39:23.770200    9870 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0805 04:39:23.771545    9870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 04:39:23.775606    9870 kubeadm.go:883] updating cluster {Name:stopped-upgrade-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51465 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0805 04:39:23.775649    9870 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 04:39:23.775688    9870 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 04:39:23.786106    9870 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 04:39:23.786114    9870 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 04:39:23.786162    9870 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 04:39:23.789057    9870 ssh_runner.go:195] Run: which lz4
	I0805 04:39:23.790401    9870 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 04:39:23.791600    9870 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 04:39:23.791617    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0805 04:39:24.720212    9870 docker.go:649] duration metric: took 929.83025ms to copy over tarball
	I0805 04:39:24.720268    9870 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
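Since the guest still carries the old k8s.gcr.io tags rather than the registry.k8s.io names v1.24.1 expects, minikube copies the preload tarball in over SSH and unpacks it straight into /var, keeping the security.capability xattrs so file capabilities survive extraction. Reproduced by hand (path as in this log):

	# unpack an lz4-compressed image preload into /var, preserving capability xattrs
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4    # minikube deletes the tarball once extracted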
	I0805 04:39:22.711641    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:25.881342    9870 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.161049041s)
	I0805 04:39:25.881365    9870 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 04:39:25.897169    9870 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 04:39:25.900936    9870 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0805 04:39:25.906377    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:25.984192    9870 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 04:39:27.731354    9870 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.747122333s)
	I0805 04:39:27.731474    9870 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 04:39:27.743162    9870 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 04:39:27.743168    9870 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 04:39:27.743173    9870 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 04:39:27.747770    9870 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:39:27.749566    9870 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:39:27.751509    9870 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:39:27.751598    9870 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:39:27.754015    9870 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:39:27.754034    9870 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:39:27.756377    9870 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:39:27.756403    9870 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:39:27.756475    9870 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:39:27.758063    9870 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:39:27.758112    9870 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 04:39:27.759212    9870 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 04:39:27.759317    9870 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:39:27.759339    9870 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:39:27.760242    9870 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 04:39:27.761367    9870 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0805 04:39:28.166286    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:39:28.173899    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:39:28.178055    9870 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0805 04:39:28.178079    9870 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:39:28.178127    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
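For each required image, LoadCachedImages asks the in-VM daemon for the image ID and compares it against the expected one; on a mismatch (or a missing image, as here) the stale tag is removed and the image is re-transferred from the host-side cache. A condensed sketch of that per-image check, using the kube-proxy name and hash from the lines above:

	# is the right kube-proxy build already in the container runtime?
	want=sha256:fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa
	got=$(docker image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.24.1 2>/dev/null || true)
	if [ "$got" != "$want" ]; then
		# wrong or absent: drop the tag and let the cache transfer re-load it
		docker rmi registry.k8s.io/kube-proxy:v1.24.1 2>/dev/null || true
	fi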
	I0805 04:39:28.180058    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:39:28.188543    9870 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0805 04:39:28.188585    9870 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:39:28.188637    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:39:28.194289    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0805 04:39:28.198322    9870 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0805 04:39:28.198343    9870 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:39:28.198398    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:39:28.208953    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0805 04:39:28.214286    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:39:28.216040    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0805 04:39:28.220300    9870 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 04:39:28.220424    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:39:28.226921    9870 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0805 04:39:28.226946    9870 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:39:28.227003    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:39:28.235237    9870 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0805 04:39:28.235258    9870 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:39:28.235319    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:39:28.242387    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0805 04:39:28.249095    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 04:39:28.249216    9870 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0805 04:39:28.251122    9870 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0805 04:39:28.251135    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0805 04:39:28.287808    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0805 04:39:28.287964    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0805 04:39:28.290222    9870 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0805 04:39:28.290231    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
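Each transferred tarball is streamed into the daemon as `sudo cat <file> | docker load` rather than `docker load -i <file>`, presumably so that only the read of the root-owned archive under /var/lib/minikube/images needs elevation while docker itself reads the archive from stdin. The same step by hand:

	# stream a root-readable image archive into the docker daemon
	sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load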
	I0805 04:39:28.298015    9870 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0805 04:39:28.298038    9870 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0805 04:39:28.298095    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0805 04:39:28.303574    9870 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0805 04:39:28.303595    9870 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0805 04:39:28.303650    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0805 04:39:28.345261    9870 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0805 04:39:28.345303    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 04:39:28.345304    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 04:39:28.345409    9870 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0805 04:39:28.345430    9870 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0805 04:39:28.346994    9870 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0805 04:39:28.347007    9870 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0805 04:39:28.347014    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0805 04:39:28.347020    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0805 04:39:28.360536    9870 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0805 04:39:28.360551    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0805 04:39:28.361473    9870 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 04:39:28.361571    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:39:28.454735    9870 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0805 04:39:28.454737    9870 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0805 04:39:28.454768    9870 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:39:28.454824    9870 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:39:28.483952    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 04:39:28.484073    9870 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0805 04:39:28.495896    9870 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0805 04:39:28.495931    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0805 04:39:28.567356    9870 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 04:39:28.567373    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0805 04:39:28.906093    9870 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 04:39:28.906117    9870 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0805 04:39:28.906124    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0805 04:39:29.060397    9870 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0805 04:39:29.060436    9870 cache_images.go:92] duration metric: took 1.317244959s to LoadCachedImages
	W0805 04:39:29.060483    9870 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0805 04:39:29.060488    9870 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0805 04:39:29.060536    9870 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-528000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
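The [Unit]/[Service] fragment above is installed as a systemd drop-in, not a whole unit: it lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp a few lines below), where the empty ExecStart= first clears the base unit's command list before the versioned kubelet binary is substituted. After placing such a drop-in, systemd has to re-read unit files, as this log does next; a sketch with a hypothetical local 10-kubeadm.conf standing in for the rendered fragment:

	# install a kubelet drop-in and have systemd pick it up
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet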
	I0805 04:39:29.060598    9870 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 04:39:29.074315    9870 cni.go:84] Creating CNI manager for ""
	I0805 04:39:29.074328    9870 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:39:29.074335    9870 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 04:39:29.074344    9870 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-528000 NodeName:stopped-upgrade-528000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 04:39:29.074418    9870 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-528000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 04:39:29.074468    9870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0805 04:39:29.077252    9870 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 04:39:29.077281    9870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 04:39:29.080261    9870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0805 04:39:29.085220    9870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 04:39:29.090260    9870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
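Note the rendered kubeadm config is staged as kubeadm.yaml.new rather than written over the live file; minikube promotes it only after diffing the two copies (both steps appear further down in this log). By hand, that staging protocol is:

	# compare the staged config against the live one, then promote it
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml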
	I0805 04:39:29.095472    9870 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0805 04:39:29.096660    9870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 04:39:29.100104    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:29.178024    9870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 04:39:29.187771    9870 certs.go:68] Setting up /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000 for IP: 10.0.2.15
	I0805 04:39:29.187780    9870 certs.go:194] generating shared ca certs ...
	I0805 04:39:29.187788    9870 certs.go:226] acquiring lock for ca certs: {Name:mk0fb10f8f63b8d852122cff16e2a9135337707a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:39:29.187964    9870 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.key
	I0805 04:39:29.188021    9870 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/proxy-client-ca.key
	I0805 04:39:29.188029    9870 certs.go:256] generating profile certs ...
	I0805 04:39:29.188105    9870 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/client.key
	I0805 04:39:29.188125    9870 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.key.80e3a405
	I0805 04:39:29.188137    9870 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.crt.80e3a405 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0805 04:39:29.271695    9870 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.crt.80e3a405 ...
	I0805 04:39:29.271706    9870 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.crt.80e3a405: {Name:mk376af323afd036739999d344555f5c14c23460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:39:29.272043    9870 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.key.80e3a405 ...
	I0805 04:39:29.272047    9870 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.key.80e3a405: {Name:mk975eee9cf97d8164af586ccad65f113a3237f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:39:29.272185    9870 certs.go:381] copying /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.crt.80e3a405 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.crt
	I0805 04:39:29.272322    9870 certs.go:385] copying /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.key.80e3a405 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.key
	I0805 04:39:29.272468    9870 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/proxy-client.key
	I0805 04:39:29.272593    9870 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/7624.pem (1338 bytes)
	W0805 04:39:29.272619    9870 certs.go:480] ignoring /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/7624_empty.pem, impossibly tiny 0 bytes
	I0805 04:39:29.272624    9870 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 04:39:29.272649    9870 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem (1078 bytes)
	I0805 04:39:29.272667    9870 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem (1123 bytes)
	I0805 04:39:29.272691    9870 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/key.pem (1675 bytes)
	I0805 04:39:29.272731    9870 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/ssl/certs/76242.pem (1708 bytes)
	I0805 04:39:29.273092    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 04:39:29.280242    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 04:39:29.287279    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 04:39:29.293607    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 04:39:29.300575    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 04:39:29.308146    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 04:39:29.315550    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 04:39:29.323067    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 04:39:29.329918    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/7624.pem --> /usr/share/ca-certificates/7624.pem (1338 bytes)
	I0805 04:39:29.336664    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/ssl/certs/76242.pem --> /usr/share/ca-certificates/76242.pem (1708 bytes)
	I0805 04:39:29.343803    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 04:39:29.350926    9870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 04:39:29.356139    9870 ssh_runner.go:195] Run: openssl version
	I0805 04:39:29.358048    9870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76242.pem && ln -fs /usr/share/ca-certificates/76242.pem /etc/ssl/certs/76242.pem"
	I0805 04:39:29.361760    9870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76242.pem
	I0805 04:39:29.363141    9870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:23 /usr/share/ca-certificates/76242.pem
	I0805 04:39:29.363158    9870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76242.pem
	I0805 04:39:29.364897    9870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76242.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 04:39:29.368276    9870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 04:39:29.371603    9870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 04:39:29.373106    9870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:35 /usr/share/ca-certificates/minikubeCA.pem
	I0805 04:39:29.373126    9870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 04:39:29.374948    9870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 04:39:29.377750    9870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7624.pem && ln -fs /usr/share/ca-certificates/7624.pem /etc/ssl/certs/7624.pem"
	I0805 04:39:29.380957    9870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7624.pem
	I0805 04:39:29.382637    9870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:23 /usr/share/ca-certificates/7624.pem
	I0805 04:39:29.382661    9870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7624.pem
	I0805 04:39:29.384449    9870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7624.pem /etc/ssl/certs/51391683.0"
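This ls/openssl/ln sequence is how an OpenSSL-style trust directory is maintained: each CA file is linked into /etc/ssl/certs and additionally exposed under its subject-hash name <hash>.0, which is the filename OpenSSL computes and opens at verification time. The pattern in isolation, with the cert from the lines above (its hash comes out as 51391683):

	# publish a CA in /etc/ssl/certs under its OpenSSL subject-hash name
	pem=/usr/share/ca-certificates/7624.pem
	sudo ln -fs "$pem" /etc/ssl/certs/7624.pem
	h=$(openssl x509 -hash -noout -in "$pem")      # prints 51391683 for this cert
	sudo ln -fs /etc/ssl/certs/7624.pem "/etc/ssl/certs/${h}.0"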
	I0805 04:39:29.388006    9870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 04:39:29.389566    9870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 04:39:29.391628    9870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 04:39:29.393554    9870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 04:39:29.395502    9870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 04:39:29.397293    9870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 04:39:29.399076    9870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
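Each of these -checkend 86400 probes asks whether the certificate remains valid for at least the next 86400 seconds; exit status 0 means yes, so a non-zero status flags a cert that expires within 24 hours and would need regeneration. Standalone:

	# succeed only if the cert is still valid 24 hours from now
	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-etcd-client.crt; then
		echo "cert valid for at least another day"
	else
		echo "cert expires within 24h"
	fi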
	I0805 04:39:29.400805    9870 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51465 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 04:39:29.400864    9870 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 04:39:29.411453    9870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 04:39:29.414360    9870 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 04:39:29.414367    9870 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 04:39:29.414389    9870 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 04:39:29.417104    9870 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 04:39:29.417413    9870 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-528000" does not appear in /Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:39:29.417513    9870 kubeconfig.go:62] /Users/jenkins/minikube-integration/19377-7130/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-528000" cluster setting kubeconfig missing "stopped-upgrade-528000" context setting]
	I0805 04:39:29.417731    9870 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/kubeconfig: {Name:mk9388f295704cbd2679ba0e5c0bb91678f79ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:39:29.418189    9870 kapi.go:59] client config for stopped-upgrade-528000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/client.key", CAFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1024d01b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
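The rest.Config above amounts to plain client-certificate auth against the endpoint; an equivalent manual probe with kubectl, using the exact files this run references (paths are specific to this Jenkins host), would be something like:

	# hit the apiserver directly with the profile's client cert (hypothetical hand-check)
	kubectl --server=https://10.0.2.15:8443 \
		--certificate-authority=/Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt \
		--client-certificate=/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/client.crt \
		--client-key=/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/client.key \
		get --raw /healthz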
	I0805 04:39:29.418514    9870 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 04:39:29.421101    9870 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-528000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0805 04:39:29.421107    9870 kubeadm.go:1160] stopping kube-system containers ...
	I0805 04:39:29.421143    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 04:39:29.434709    9870 docker.go:483] Stopping containers: [0f824af6ef04 2ce668670762 d9ac8003079b c61b252b6587 eeef0a622ba7 c3de4560f438 9d1e43dbed7e fdcbbe9ff0d6 e320788f24f2]
	I0805 04:39:29.434776    9870 ssh_runner.go:195] Run: docker stop 0f824af6ef04 2ce668670762 d9ac8003079b c61b252b6587 eeef0a622ba7 c3de4560f438 9d1e43dbed7e fdcbbe9ff0d6 e320788f24f2
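The filter name=k8s_.*_(kube-system)_ matches the k8s_<container>_<pod>_<namespace>_... names that cri-dockerd gives pod containers, so the pair of commands above amounts to "stop every kube-system container". Collapsed into one guarded snippet that mirrors those two steps:

	# collect, then stop, all kube-system pod containers
	ids=$(docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}')
	[ -n "$ids" ] && docker stop $ids    # unquoted on purpose: one argument per ID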
	I0805 04:39:29.445816    9870 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 04:39:29.451341    9870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 04:39:29.454066    9870 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 04:39:29.454071    9870 kubeadm.go:157] found existing configuration files:
	
	I0805 04:39:29.454093    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/admin.conf
	I0805 04:39:29.456699    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 04:39:29.456721    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 04:39:29.459713    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/kubelet.conf
	I0805 04:39:29.462225    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 04:39:29.462246    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 04:39:29.464730    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/controller-manager.conf
	I0805 04:39:29.467716    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 04:39:29.467741    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 04:39:29.470282    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/scheduler.conf
	I0805 04:39:29.472669    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 04:39:29.472690    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 04:39:29.475539    9870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 04:39:29.478198    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:39:29.500488    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:39:29.821566    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:39:27.713538    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:27.713682    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:39:27.730847    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:39:27.730922    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:39:27.742827    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:39:27.742898    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:39:27.755521    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:39:27.755574    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:39:27.772186    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:39:27.772262    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:39:27.783279    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:39:27.783328    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:39:27.795055    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:39:27.795112    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:39:27.808433    9720 logs.go:276] 0 containers: []
	W0805 04:39:27.808445    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:39:27.808497    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:39:27.820390    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:39:27.820410    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:39:27.820415    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:39:27.840734    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:39:27.840744    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:39:27.865420    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:39:27.865437    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:39:27.878290    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:39:27.878301    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:39:27.882888    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:39:27.882900    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:39:27.923583    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:39:27.923594    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:39:27.938212    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:39:27.938223    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:39:27.962412    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:39:27.962424    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:39:27.975141    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:39:27.975152    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:39:28.014428    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:39:28.014444    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:39:28.029616    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:39:28.029626    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:39:28.043048    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:39:28.043059    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:39:28.059785    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:39:28.059796    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:39:28.072965    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:39:28.072975    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:39:28.110538    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:39:28.110548    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:39:28.125628    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:39:28.125639    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:39:28.136969    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:39:28.136984    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:39:30.656586    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:29.949457    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:39:29.974777    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:39:30.000669    9870 api_server.go:52] waiting for apiserver process to appear ...
	I0805 04:39:30.000742    9870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:39:30.502981    9870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:39:31.002816    9870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:39:31.006970    9870 api_server.go:72] duration metric: took 1.006296583s to wait for apiserver process to appear ...
	I0805 04:39:31.006977    9870 api_server.go:88] waiting for apiserver healthz status ...
	I0805 04:39:31.006986    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
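Both processes are now in the same two-stage wait: first poll for a kube-apiserver process, then poll /healthz until it answers (the real code also enforces a deadline, hence the "stopped:" lines). A compact version of that loop against the endpoint from this log:

	# wait for the apiserver process, then for a healthy endpoint
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
	until curl -ksf https://10.0.2.15:8443/healthz >/dev/null; do sleep 1; done
	echo "apiserver reports healthy"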
	I0805 04:39:35.658979    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:35.659474    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:39:35.696901    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:39:35.697045    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:39:35.717372    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:39:35.717467    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:39:35.732259    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:39:35.732332    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:39:35.748076    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:39:35.748149    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:39:35.758539    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:39:35.758605    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:39:35.769664    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:39:35.769735    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:39:35.781093    9720 logs.go:276] 0 containers: []
	W0805 04:39:35.781108    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:39:35.781161    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:39:35.798684    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:39:35.798709    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:39:35.798715    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:39:35.811679    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:39:35.811692    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:39:35.830889    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:39:35.830899    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:39:35.843162    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:39:35.843174    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:39:35.859205    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:39:35.859216    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:39:35.899084    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:39:35.899106    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:39:35.910926    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:39:35.910945    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:39:35.915057    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:39:35.915065    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:39:35.937347    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:39:35.937363    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:39:35.949032    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:39:35.949046    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:39:35.967221    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:39:35.967231    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:39:35.990845    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:39:35.990858    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:39:36.002594    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:39:36.002608    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:39:36.037335    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:39:36.037345    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:39:36.052213    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:39:36.052227    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:39:36.066292    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:39:36.066302    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:39:36.104555    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:39:36.104570    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:39:36.009172    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:36.009203    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:38.621539    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:41.009587    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:41.009635    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:43.623895    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:43.624118    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:39:43.645697    9720 logs.go:276] 2 containers: [452a7ef216d4 ba2510eb9fe9]
	I0805 04:39:43.645814    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:39:43.661321    9720 logs.go:276] 2 containers: [ee8d052ddc4b 571fe6bf4cec]
	I0805 04:39:43.661391    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:39:43.673975    9720 logs.go:276] 1 containers: [15f86cf7ed1c]
	I0805 04:39:43.674037    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:39:43.685201    9720 logs.go:276] 2 containers: [916af5d0eb1a 2ecf263175c1]
	I0805 04:39:43.685270    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:39:43.695605    9720 logs.go:276] 1 containers: [0e6709d76485]
	I0805 04:39:43.695662    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:39:43.705842    9720 logs.go:276] 2 containers: [d4061c1f9fe7 ab095ace8ff8]
	I0805 04:39:43.705907    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:39:43.716423    9720 logs.go:276] 0 containers: []
	W0805 04:39:43.716433    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:39:43.716485    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:39:43.727251    9720 logs.go:276] 2 containers: [39c2bd30535a d288b863774e]
	I0805 04:39:43.727269    9720 logs.go:123] Gathering logs for storage-provisioner [39c2bd30535a] ...
	I0805 04:39:43.727274    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39c2bd30535a"
	I0805 04:39:43.741018    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:39:43.741029    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:39:43.776792    9720 logs.go:123] Gathering logs for kube-apiserver [ba2510eb9fe9] ...
	I0805 04:39:43.776802    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba2510eb9fe9"
	I0805 04:39:43.814630    9720 logs.go:123] Gathering logs for kube-controller-manager [ab095ace8ff8] ...
	I0805 04:39:43.814643    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab095ace8ff8"
	I0805 04:39:43.828817    9720 logs.go:123] Gathering logs for kube-controller-manager [d4061c1f9fe7] ...
	I0805 04:39:43.828833    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4061c1f9fe7"
	I0805 04:39:43.846289    9720 logs.go:123] Gathering logs for kube-apiserver [452a7ef216d4] ...
	I0805 04:39:43.846298    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 452a7ef216d4"
	I0805 04:39:43.860982    9720 logs.go:123] Gathering logs for coredns [15f86cf7ed1c] ...
	I0805 04:39:43.860992    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f86cf7ed1c"
	I0805 04:39:43.872379    9720 logs.go:123] Gathering logs for kube-scheduler [2ecf263175c1] ...
	I0805 04:39:43.872389    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ecf263175c1"
	I0805 04:39:43.887809    9720 logs.go:123] Gathering logs for kube-scheduler [916af5d0eb1a] ...
	I0805 04:39:43.887819    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 916af5d0eb1a"
	I0805 04:39:43.904387    9720 logs.go:123] Gathering logs for kube-proxy [0e6709d76485] ...
	I0805 04:39:43.904402    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6709d76485"
	I0805 04:39:43.916503    9720 logs.go:123] Gathering logs for storage-provisioner [d288b863774e] ...
	I0805 04:39:43.916513    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d288b863774e"
	I0805 04:39:43.927885    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:39:43.927899    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:39:43.939871    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:39:43.939885    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:39:43.977318    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:39:43.977326    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:39:43.981815    9720 logs.go:123] Gathering logs for etcd [ee8d052ddc4b] ...
	I0805 04:39:43.981821    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee8d052ddc4b"
	I0805 04:39:43.995780    9720 logs.go:123] Gathering logs for etcd [571fe6bf4cec] ...
	I0805 04:39:43.995795    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 571fe6bf4cec"
	I0805 04:39:44.013981    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:39:44.013992    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
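	(The gathering cycle above follows a fixed recipe: for each control-plane component, list matching container IDs with a docker ps name filter, then tail the last 400 lines of each match, warning when nothing matches, as with "kindnet". A minimal Go sketch of that recipe follows; the component names, filter pattern, and 400-line tail are read directly off the log lines, while the helper name and output framing are illustrative assumptions, not minikube's actual implementation.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// tailContainerLogs mirrors the pattern in the log above: find container IDs
	// for a kube component via a docker ps name filter, then tail the last 400
	// lines of each container found.
	func tailContainerLogs(component string) error {
		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// cf. `No container was found matching "kindnet"` in the log
			fmt.Printf("No container was found matching %q\n", component)
			return nil
		}
		for _, id := range ids {
			// docker logs --tail 400 <id>
			logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				return err
			}
			fmt.Printf("==> %s [%s] <==\n%s", component, id, logs)
		}
		return nil
	}

	func main() {
		// Component names as they appear in the gathering cycle above.
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			if err := tailContainerLogs(c); err != nil {
				fmt.Println("error:", err)
			}
		}
	}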
	I0805 04:39:46.538341    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:46.010164    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:46.010186    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:51.540693    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:51.540777    9720 kubeadm.go:597] duration metric: took 4m4.466651709s to restartPrimaryControlPlane
	W0805 04:39:51.540873    9720 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 04:39:51.540910    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0805 04:39:52.585663    9720 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.044730167s)
	I0805 04:39:52.585723    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 04:39:52.590916    9720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 04:39:52.593830    9720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 04:39:52.596922    9720 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 04:39:52.596929    9720 kubeadm.go:157] found existing configuration files:
	
	I0805 04:39:52.596952    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/admin.conf
	I0805 04:39:52.599734    9720 kubeadm.go:163] "https://control-plane.minikube.internal:51233" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 04:39:52.599755    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 04:39:52.602339    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/kubelet.conf
	I0805 04:39:52.605343    9720 kubeadm.go:163] "https://control-plane.minikube.internal:51233" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 04:39:52.605364    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 04:39:52.608489    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/controller-manager.conf
	I0805 04:39:52.611139    9720 kubeadm.go:163] "https://control-plane.minikube.internal:51233" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 04:39:52.611161    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 04:39:52.613785    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/scheduler.conf
	I0805 04:39:52.616702    9720 kubeadm.go:163] "https://control-plane.minikube.internal:51233" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51233 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 04:39:52.616722    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 04:39:52.619526    9720 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 04:39:52.636661    9720 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0805 04:39:52.636706    9720 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 04:39:52.684886    9720 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 04:39:52.684947    9720 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 04:39:52.684997    9720 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 04:39:52.733461    9720 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 04:39:52.741459    9720 out.go:204]   - Generating certificates and keys ...
	I0805 04:39:52.741493    9720 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 04:39:52.741531    9720 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 04:39:52.741576    9720 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 04:39:52.741610    9720 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 04:39:52.741651    9720 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 04:39:52.741682    9720 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 04:39:52.741722    9720 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 04:39:52.741759    9720 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 04:39:52.741799    9720 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 04:39:52.741842    9720 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 04:39:52.741871    9720 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 04:39:52.741909    9720 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 04:39:52.930897    9720 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 04:39:53.047113    9720 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 04:39:53.171273    9720 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 04:39:53.218644    9720 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 04:39:53.252428    9720 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 04:39:53.252741    9720 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 04:39:53.252772    9720 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 04:39:53.341628    9720 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 04:39:51.010754    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:51.010812    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:53.345800    9720 out.go:204]   - Booting up control plane ...
	I0805 04:39:53.345862    9720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 04:39:53.345941    9720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 04:39:53.346096    9720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 04:39:53.346170    9720 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 04:39:53.347543    9720 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 04:39:57.349947    9720 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002304 seconds
	I0805 04:39:57.350052    9720 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 04:39:57.355368    9720 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 04:39:57.873958    9720 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 04:39:57.874356    9720 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-763000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 04:39:58.378547    9720 kubeadm.go:310] [bootstrap-token] Using token: 0ez5dh.g9773038io9n2e5d
	I0805 04:39:58.381472    9720 out.go:204]   - Configuring RBAC rules ...
	I0805 04:39:58.381537    9720 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 04:39:58.381592    9720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 04:39:58.388562    9720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 04:39:58.389584    9720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 04:39:58.390721    9720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 04:39:58.391743    9720 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 04:39:58.396837    9720 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 04:39:58.579495    9720 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 04:39:58.782105    9720 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 04:39:58.782538    9720 kubeadm.go:310] 
	I0805 04:39:58.782570    9720 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 04:39:58.782575    9720 kubeadm.go:310] 
	I0805 04:39:58.782612    9720 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 04:39:58.782621    9720 kubeadm.go:310] 
	I0805 04:39:58.782633    9720 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 04:39:58.782711    9720 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 04:39:58.782773    9720 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 04:39:58.782795    9720 kubeadm.go:310] 
	I0805 04:39:58.782841    9720 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 04:39:58.782863    9720 kubeadm.go:310] 
	I0805 04:39:58.782892    9720 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 04:39:58.782894    9720 kubeadm.go:310] 
	I0805 04:39:58.782928    9720 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 04:39:58.782981    9720 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 04:39:58.783080    9720 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 04:39:58.783086    9720 kubeadm.go:310] 
	I0805 04:39:58.783134    9720 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 04:39:58.783180    9720 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 04:39:58.783183    9720 kubeadm.go:310] 
	I0805 04:39:58.783232    9720 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0ez5dh.g9773038io9n2e5d \
	I0805 04:39:58.783301    9720 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:00ad0c80a9f7b4b654bf16d7fdaf8cb3872452317480a453e3b9036c421b1809 \
	I0805 04:39:58.783316    9720 kubeadm.go:310] 	--control-plane 
	I0805 04:39:58.783319    9720 kubeadm.go:310] 
	I0805 04:39:58.783363    9720 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 04:39:58.783367    9720 kubeadm.go:310] 
	I0805 04:39:58.783406    9720 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0ez5dh.g9773038io9n2e5d \
	I0805 04:39:58.783460    9720 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:00ad0c80a9f7b4b654bf16d7fdaf8cb3872452317480a453e3b9036c421b1809 
	I0805 04:39:58.783516    9720 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 04:39:58.783524    9720 cni.go:84] Creating CNI manager for ""
	I0805 04:39:58.783531    9720 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:39:58.789252    9720 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 04:39:58.796300    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 04:39:58.799313    9720 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 04:39:58.806718    9720 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 04:39:58.806817    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 04:39:58.806818    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-763000 minikube.k8s.io/updated_at=2024_08_05T04_39_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=running-upgrade-763000 minikube.k8s.io/primary=true
	I0805 04:39:58.869699    9720 kubeadm.go:1113] duration metric: took 62.963542ms to wait for elevateKubeSystemPrivileges
	I0805 04:39:58.869754    9720 ops.go:34] apiserver oom_adj: -16
	I0805 04:39:58.869759    9720 kubeadm.go:394] duration metric: took 4m11.808806833s to StartCluster
	I0805 04:39:58.869769    9720 settings.go:142] acquiring lock: {Name:mk4ccaf175b574f554efa4f63e0208c978f3f34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:39:58.869937    9720 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:39:58.870305    9720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/kubeconfig: {Name:mk9388f295704cbd2679ba0e5c0bb91678f79ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:39:58.870539    9720 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:39:58.870609    9720 config.go:182] Loaded profile config "running-upgrade-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:39:58.870636    9720 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 04:39:58.870671    9720 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-763000"
	I0805 04:39:58.870678    9720 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-763000"
	I0805 04:39:58.870683    9720 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-763000"
	W0805 04:39:58.870686    9720 addons.go:243] addon storage-provisioner should already be in state true
	I0805 04:39:58.870689    9720 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-763000"
	I0805 04:39:58.870698    9720 host.go:66] Checking if "running-upgrade-763000" exists ...
	I0805 04:39:58.871633    9720 kapi.go:59] client config for running-upgrade-763000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/running-upgrade-763000/client.key", CAFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105fe41b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 04:39:58.871756    9720 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-763000"
	W0805 04:39:58.871761    9720 addons.go:243] addon default-storageclass should already be in state true
	I0805 04:39:58.871767    9720 host.go:66] Checking if "running-upgrade-763000" exists ...
	I0805 04:39:58.875140    9720 out.go:177] * Verifying Kubernetes components...
	I0805 04:39:58.875612    9720 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 04:39:58.878450    9720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 04:39:58.878456    9720 sshutil.go:53] new ssh client: &{IP:localhost Port:51201 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I0805 04:39:58.881159    9720 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:39:56.011667    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:56.011691    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:58.884253    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:58.890218    9720 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 04:39:58.890226    9720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 04:39:58.890235    9720 sshutil.go:53] new ssh client: &{IP:localhost Port:51201 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I0805 04:39:58.978063    9720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 04:39:58.983344    9720 api_server.go:52] waiting for apiserver process to appear ...
	I0805 04:39:58.983391    9720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:39:58.987305    9720 api_server.go:72] duration metric: took 116.754625ms to wait for apiserver process to appear ...
	I0805 04:39:58.987313    9720 api_server.go:88] waiting for apiserver healthz status ...
	I0805 04:39:58.987319    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:58.993102    9720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 04:39:59.009124    9720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 04:40:01.012996    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:01.013040    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:03.989463    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:03.989508    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:06.014366    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:06.014500    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:08.989858    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:08.989886    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:11.016610    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:11.016654    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:13.990275    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:13.990307    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:16.018851    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:16.018904    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:18.991279    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:18.991312    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:21.021353    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:21.021391    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:23.992020    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:23.992069    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:28.993037    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:28.993084    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0805 04:40:29.331459    9720 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0805 04:40:29.337032    9720 out.go:177] * Enabled addons: storage-provisioner
	I0805 04:40:26.023726    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:26.023771    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:29.343921    9720 addons.go:510] duration metric: took 30.473047166s for enable addons: enabled=[storage-provisioner]
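	(The interleaved "Checking apiserver healthz ..." / "stopped: ... Client.Timeout exceeded" pairs above come from two test processes, 9720 and 9870, polling the same endpoint; the five-second spacing of the timestamps is consistent with an HTTP client timeout of roughly 5s, and the overall budget matches the "Will wait 6m0s for node" line logged earlier. A small, hedged Go sketch of such a poll loop follows; the URL and time budgets are read off the log, while the TLS handling, retry pause, and loop structure are assumptions for illustration, not minikube's actual code.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint as reported in the log; the apiserver serves a self-signed
		// cert, so verification is skipped here (assumption, for illustration).
		const healthz = "https://10.0.2.15:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s spacing of the log entries
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // cf. "Will wait 6m0s for node"
		for time.Now().Before(deadline) {
			fmt.Println("Checking apiserver healthz at", healthz, "...")
			resp, err := client.Get(healthz)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver is healthy")
					return
				}
			} else {
				// e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
				fmt.Println("stopped:", err)
			}
			time.Sleep(time.Second) // brief pause between attempts (assumption)
		}
		fmt.Println("gave up waiting for healthz")
	}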
	I0805 04:40:31.026129    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:31.026349    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:40:31.043480    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:40:31.043565    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:40:31.057171    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:40:31.057241    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:40:31.068518    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:40:31.068583    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:40:31.079052    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:40:31.079117    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:40:31.090207    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:40:31.090272    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:40:31.104816    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:40:31.104882    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:40:31.115085    9870 logs.go:276] 0 containers: []
	W0805 04:40:31.115097    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:40:31.115147    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:40:31.125374    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:40:31.125391    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:40:31.125397    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:40:31.165246    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:40:31.165257    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:40:31.209669    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:40:31.209679    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:40:31.222332    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:40:31.222343    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:40:31.234054    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:40:31.234063    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:40:31.248422    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:40:31.248433    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:40:31.259439    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:40:31.259449    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:40:31.284799    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:40:31.284807    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:40:31.289300    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:40:31.289321    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:40:31.390961    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:40:31.390973    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:40:31.405870    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:40:31.405881    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:40:31.417556    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:40:31.417569    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:40:31.428707    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:40:31.428718    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:40:31.456348    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:40:31.456360    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:40:31.474749    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:40:31.474760    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:40:31.494727    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:40:31.494740    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:40:31.510081    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:40:31.510094    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:40:34.024224    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:33.994359    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:33.994421    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:39.026464    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:39.026613    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:40:39.047925    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:40:39.048036    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:40:39.062005    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:40:39.062079    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:40:39.073586    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:40:39.073657    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:40:39.084217    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:40:39.084285    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:40:39.094885    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:40:39.094951    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:40:39.104909    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:40:39.104970    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:40:39.115224    9870 logs.go:276] 0 containers: []
	W0805 04:40:39.115234    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:40:39.115289    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:40:39.125810    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:40:39.125828    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:40:39.125833    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:40:39.140364    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:40:39.140375    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:40:39.151918    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:40:39.151929    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:40:39.172172    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:40:39.172182    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:40:39.183170    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:40:39.183181    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:40:39.187377    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:40:39.187383    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:40:39.205379    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:40:39.205389    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:40:39.217546    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:40:39.217556    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:40:39.256669    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:40:39.256688    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:40:39.271187    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:40:39.271201    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:40:39.291180    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:40:39.291193    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:40:39.302585    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:40:39.302595    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:40:39.326115    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:40:39.326122    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:40:39.362991    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:40:39.363002    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:40:39.384335    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:40:39.384348    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:40:39.398134    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:40:39.398144    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:40:39.409680    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:40:39.409692    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:40:38.996389    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:38.996444    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:41.950069    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:43.998564    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:43.998608    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:46.952559    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:46.952811    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:40:46.978461    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:40:46.978584    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:40:46.995789    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:40:46.995862    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:40:47.008241    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:40:47.008312    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:40:47.023648    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:40:47.023713    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:40:47.033925    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:40:47.033990    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:40:47.045277    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:40:47.045343    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:40:47.055761    9870 logs.go:276] 0 containers: []
	W0805 04:40:47.055773    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:40:47.055823    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:40:47.065847    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:40:47.065863    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:40:47.065870    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:40:47.100457    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:40:47.100471    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:40:47.114435    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:40:47.114449    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:40:47.130804    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:40:47.130819    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:40:47.148179    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:40:47.148189    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:40:47.161637    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:40:47.161647    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:40:47.201304    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:40:47.201313    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:40:47.213067    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:40:47.213088    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:40:47.231698    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:40:47.231709    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:40:47.235630    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:40:47.235640    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:40:47.272372    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:40:47.272390    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:40:47.294019    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:40:47.294029    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:40:47.306343    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:40:47.306358    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:40:47.320543    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:40:47.320556    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:40:47.331850    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:40:47.331861    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:40:47.344958    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:40:47.344967    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:40:47.356816    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:40:47.356826    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:40:49.000995    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:49.001041    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:49.882072    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:54.003425    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:54.003460    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:54.883294    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:54.883451    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:40:54.905651    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:40:54.905745    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:40:54.922881    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:40:54.922968    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:40:54.934457    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:40:54.934526    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:40:54.945314    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:40:54.945382    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:40:54.956638    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:40:54.956706    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:40:54.967850    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:40:54.967913    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:40:54.977943    9870 logs.go:276] 0 containers: []
	W0805 04:40:54.977954    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:40:54.978009    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:40:54.988453    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:40:54.988471    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:40:54.988478    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:40:55.025755    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:40:55.025765    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:40:55.036733    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:40:55.036744    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:40:55.049572    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:40:55.049582    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:40:55.060710    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:40:55.060721    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:40:55.064926    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:40:55.064931    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:40:55.100218    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:40:55.100229    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:40:55.114429    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:40:55.114439    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:40:55.136527    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:40:55.136538    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:40:55.148268    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:40:55.148283    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:40:55.160579    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:40:55.160590    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:40:55.177673    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:40:55.177683    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:40:55.189121    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:40:55.189130    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:40:55.225484    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:40:55.225492    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:40:55.240978    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:40:55.240990    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:40:55.254541    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:40:55.254550    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:40:55.274410    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:40:55.274422    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:40:57.801926    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:59.005777    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:59.005863    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:40:59.016785    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:40:59.016863    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:40:59.028054    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:40:59.028132    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:40:59.045783    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:40:59.045868    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:40:59.066924    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:40:59.066990    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:40:59.077490    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:40:59.077557    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:40:59.088008    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:40:59.088069    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:40:59.097977    9720 logs.go:276] 0 containers: []
	W0805 04:40:59.097989    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:40:59.098041    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:40:59.109086    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:40:59.109108    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:40:59.109114    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:40:59.123100    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:40:59.123110    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:40:59.134579    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:40:59.134589    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:40:59.146164    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:40:59.146174    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:40:59.157102    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:40:59.157113    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:40:59.181480    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:40:59.181488    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:40:59.185948    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:40:59.185954    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:40:59.224060    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:40:59.224072    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:40:59.235544    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:40:59.235554    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:40:59.250949    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:40:59.250960    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:40:59.268720    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:40:59.268731    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:40:59.281494    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:40:59.281508    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:40:59.320333    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:40:59.320343    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
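
	The cycle above repeats for the rest of this test: two minikube processes (PIDs 9720 and 9870) each probe the apiserver's /healthz endpoint, hit the per-request client timeout, and then sweep the node for diagnostics before probing again. A minimal Go sketch of that probe loop follows; the waitForHealthz helper and the timeout values (chosen to mirror the roughly 4-5 s gaps between the timestamps above) are illustrative assumptions, not minikube's actual source.

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    // waitForHealthz polls url until it returns 200 OK or the overall
	    // deadline passes. The per-request timeout is what produces the
	    // "Client.Timeout exceeded while awaiting headers" errors logged above.
	    func waitForHealthz(url string, deadline time.Duration) error {
	    	client := &http.Client{
	    		Timeout: 4 * time.Second,
	    		Transport: &http.Transport{
	    			// The test VM's apiserver serves a self-signed certificate.
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	stop := time.Now().Add(deadline)
	    	for time.Now().Before(stop) {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				return nil // apiserver answered; cluster is reachable
	    			}
	    		}
	    		time.Sleep(3 * time.Second) // back off before the next probe
	    	}
	    	return fmt.Errorf("apiserver never became healthy at %s", url)
	    }

	    func main() {
	    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 2*time.Minute); err != nil {
	    		fmt.Println(err)
	    	}
	    }
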
	I0805 04:41:01.836241    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:02.804688    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:02.804861    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:02.820783    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:02.820857    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:02.833913    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:02.833988    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:02.844724    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:02.844812    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:02.854914    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:02.854987    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:02.865190    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:02.865254    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:02.875436    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:02.875494    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:02.885741    9870 logs.go:276] 0 containers: []
	W0805 04:41:02.885753    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:02.885809    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:02.898553    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:02.898572    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:02.898578    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:02.919743    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:02.919755    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:02.937425    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:02.937436    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:02.951695    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:02.951705    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:02.965922    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:02.965933    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:02.981238    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:02.981250    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:02.997294    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:02.997309    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:03.011497    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:03.011507    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:03.015497    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:03.015512    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:03.049857    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:03.049868    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:03.089139    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:03.089158    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:03.102751    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:03.102765    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:03.114751    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:03.114763    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:03.125875    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:03.125890    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:03.138585    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:03.138597    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:03.178683    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:03.178701    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:03.203910    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:03.203918    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
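
	Each diagnostic sweep starts by locating containers per component. Docker containers created for kubeadm-managed pods are named k8s_<component>_..., so filtering on that name prefix and formatting to bare IDs yields exactly the lists logged above (e.g. "2 containers: [1980c300e1b1 d9ac8003079b]" where an exited apiserver instance lingers beside the current one). A sketch of that step, using a hypothetical containerIDs helper rather than minikube's real code:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // containerIDs lists the IDs of all containers, running or exited,
	    // whose name matches the k8s_<component> prefix.
	    func containerIDs(component string) ([]string, error) {
	    	out, err := exec.Command("docker", "ps", "-a",
	    		"--filter", "name=k8s_"+component,
	    		"--format", "{{.ID}}").Output()
	    	if err != nil {
	    		return nil, err
	    	}
	    	return strings.Fields(string(out)), nil
	    }

	    func main() {
	    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
	    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
	    		"kindnet", "storage-provisioner"} {
	    		ids, err := containerIDs(c)
	    		if err != nil {
	    			fmt.Println(c, "error:", err)
	    			continue
	    		}
	    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	    	}
	    }

	An empty result is not an error, which is why "kindnet" consistently produces "0 containers" plus a warning rather than aborting the sweep.
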
	I0805 04:41:06.839124    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:06.839239    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:06.852323    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:06.852394    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:06.863464    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:06.863529    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:06.874521    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:06.874588    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:06.885446    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:06.885515    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:06.896063    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:06.896130    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:06.906911    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:06.906975    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:06.918188    9720 logs.go:276] 0 containers: []
	W0805 04:41:06.918199    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:06.918254    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:06.928713    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:06.928730    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:06.928735    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:41:06.941129    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:06.941139    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:06.956232    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:06.956242    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:06.968216    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:06.968231    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:06.987333    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:06.987344    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:07.012338    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:07.012346    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:07.027161    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:07.027172    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:07.064782    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:07.064789    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:07.076467    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:07.076481    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:07.091230    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:07.091240    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:07.104813    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:07.104823    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:07.120049    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:07.120058    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:07.124230    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:07.124238    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:05.720555    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:09.664797    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:10.721896    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:10.722117    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:10.740495    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:10.740597    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:10.754430    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:10.754513    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:10.766476    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:10.766547    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:10.778277    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:10.778353    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:10.789291    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:10.789356    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:10.799758    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:10.799818    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:10.809974    9870 logs.go:276] 0 containers: []
	W0805 04:41:10.809987    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:10.810038    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:10.820582    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:10.820598    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:10.820604    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:10.857239    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:10.857249    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:10.878192    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:10.878203    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:10.891808    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:10.891818    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:10.905450    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:10.905461    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:10.909749    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:10.909756    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:10.930191    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:10.930205    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:10.946157    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:10.946168    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:10.960215    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:10.960225    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:10.981729    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:10.981738    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:10.997350    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:10.997362    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:11.010595    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:11.010604    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:11.021771    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:11.021782    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:11.058000    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:11.058013    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:11.095676    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:11.095689    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:11.113616    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:11.113630    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:11.139084    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:11.139100    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:13.653467    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:14.667171    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:14.667365    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:14.683659    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:14.683740    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:14.696146    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:14.696218    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:14.707831    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:14.707898    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:14.718031    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:14.718094    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:14.728118    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:14.728190    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:14.738151    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:14.738214    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:14.748430    9720 logs.go:276] 0 containers: []
	W0805 04:41:14.748441    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:14.748489    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:14.758718    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:14.758733    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:14.758739    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:14.771276    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:14.771288    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:14.783892    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:14.783903    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:14.807196    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:14.807203    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:14.811414    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:14.811421    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:14.851291    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:14.851302    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:14.866296    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:14.866312    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:14.881664    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:14.881680    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:14.904384    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:14.904398    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:14.917221    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:14.917239    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:14.956202    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:14.956215    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:14.970576    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:14.970586    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:14.982867    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:14.982878    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
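
	After discovery, each named source is dumped with a single bash command capped at 400 lines. The fixed sources visible above map roughly as in the sketch below; this is an assumed shape, not minikube's code, and where the real harness runs each command over SSH inside the guest (the ssh_runner.go lines), the sketch runs them locally for simplicity. Per-container sources add one docker logs --tail 400 <id> entry per ID found in the discovery step.

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// Fixed log sources and the exact commands seen in the run lines above.
	    	sources := map[string]string{
	    		"kubelet": "sudo journalctl -u kubelet -n 400",
	    		"Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
	    		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	    		"describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
	    			" --kubeconfig=/var/lib/minikube/kubeconfig",
	    	}
	    	for name, cmd := range sources {
	    		fmt.Println("Gathering logs for", name, "...")
	    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	    		if err != nil {
	    			fmt.Println(name, "failed:", err)
	    		}
	    		_ = out // in the real harness the output is attached to the failure report
	    	}
	    }
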
	I0805 04:41:18.655964    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:18.656169    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:18.681124    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:18.681215    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:18.698281    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:18.698351    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:18.711296    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:18.711359    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:18.728083    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:18.728161    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:18.738251    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:18.738322    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:18.749302    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:18.749370    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:18.759262    9870 logs.go:276] 0 containers: []
	W0805 04:41:18.759274    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:18.759329    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:18.770254    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:18.770270    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:18.770278    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:18.775334    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:18.775342    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:18.810645    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:18.810659    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:18.824562    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:18.824577    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:18.836318    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:18.836329    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:18.855172    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:18.855182    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:18.893452    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:18.893460    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:18.907354    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:18.907368    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:18.926160    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:18.926173    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:18.942638    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:18.942649    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:18.967720    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:18.967728    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:18.983035    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:18.983045    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:19.021944    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:19.021955    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:19.036850    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:19.036884    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:19.047798    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:19.047811    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:19.065302    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:19.065312    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:19.080998    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:19.081009    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:17.501896    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:21.604980    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:22.504218    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:22.504400    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:22.529430    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:22.529543    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:22.545780    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:22.545849    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:22.558623    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:22.558696    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:22.569625    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:22.569689    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:22.580020    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:22.580086    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:22.590444    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:22.590510    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:22.601944    9720 logs.go:276] 0 containers: []
	W0805 04:41:22.601960    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:22.602017    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:22.612819    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:22.612834    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:22.612839    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:22.649519    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:22.649532    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:22.665260    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:22.665273    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:22.678268    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:22.678277    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:22.699357    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:22.699371    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:22.715227    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:22.715239    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:22.727743    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:22.727755    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:22.750830    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:22.750837    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:22.786779    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:22.786787    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:22.791140    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:22.791146    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:22.808184    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:22.808198    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:22.819487    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:22.819496    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:41:22.830907    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:22.830922    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:25.344216    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:26.607744    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:26.608066    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:26.634267    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:26.634394    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:26.653451    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:26.653534    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:26.666870    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:26.666946    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:26.678621    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:26.678688    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:26.690579    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:26.690646    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:26.701467    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:26.701535    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:26.712244    9870 logs.go:276] 0 containers: []
	W0805 04:41:26.712254    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:26.712306    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:26.722938    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:26.722958    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:26.722963    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:26.727697    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:26.727706    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:26.767080    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:26.767091    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:26.780812    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:26.780822    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:26.792779    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:26.792791    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:26.807136    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:26.807147    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:26.823742    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:26.823752    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:26.861312    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:26.861320    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:26.896126    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:26.896136    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:26.908670    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:26.908685    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:26.920261    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:26.920273    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:26.932087    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:26.932100    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:26.946803    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:26.946816    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:26.960734    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:26.960744    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:26.974092    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:26.974102    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:26.995603    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:26.995614    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:27.013771    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:27.013781    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:29.539399    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:30.346657    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:30.346815    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:30.363416    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:30.363505    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:30.379632    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:30.379702    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:30.390613    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:30.390685    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:30.401093    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:30.401158    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:30.411618    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:30.411684    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:30.424818    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:30.424891    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:30.436199    9720 logs.go:276] 0 containers: []
	W0805 04:41:30.436210    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:30.436267    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:30.446435    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:30.446449    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:30.446455    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:30.461041    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:30.461050    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:30.476812    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:30.476822    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:30.488185    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:30.488194    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:30.493373    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:30.493380    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:30.528952    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:30.528962    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:30.547534    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:30.547544    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:30.559723    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:30.559737    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:41:30.571739    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:30.571753    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:30.583342    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:30.583352    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:30.601367    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:30.601380    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:30.626318    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:30.626326    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:30.664478    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:30.664489    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
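
	The recurring "container status" probe is one bash line with two fallbacks: `which crictl || echo crictl` expands to the crictl path when it is installed (otherwise to the bare word crictl, which then fails to execute), and the trailing || sudo docker ps -a catches either failure and lists containers via Docker instead. A sketch of issuing that line from Go, assuming bash is available as in the run lines above:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// Prefer crictl when present; otherwise fall back to docker ps -a.
	    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	    	if err != nil {
	    		fmt.Println("container status failed:", err)
	    	}
	    	fmt.Print(string(out))
	    }
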
	I0805 04:41:34.541852    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:34.542047    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:34.562718    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:34.562827    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:34.577035    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:34.577107    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:34.599523    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:34.599583    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:34.614794    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:34.614852    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:34.626578    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:34.626641    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:34.646885    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:34.646953    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:34.657186    9870 logs.go:276] 0 containers: []
	W0805 04:41:34.657198    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:34.657248    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:34.668190    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:34.668207    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:34.668212    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:34.682044    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:34.682055    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:34.693797    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:34.693817    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:34.705190    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:34.705202    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:34.726377    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:34.726387    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:34.740205    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:34.740215    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:34.763453    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:34.763460    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:34.802506    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:34.802517    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:34.817240    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:34.817253    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:34.837467    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:34.837485    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:33.178889    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:34.871062    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:34.871076    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:34.883530    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:34.883542    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:34.921541    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:34.921553    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:34.935427    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:34.935442    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:34.946969    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:34.946980    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:34.986805    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:34.986818    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:34.991599    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:34.991606    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:37.506014    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:38.181319    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:38.181549    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:38.208567    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:38.208679    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:38.226088    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:38.226166    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:38.240298    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:38.240373    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:38.251685    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:38.251747    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:38.261682    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:38.261746    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:38.272262    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:38.272322    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:38.282709    9720 logs.go:276] 0 containers: []
	W0805 04:41:38.282722    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:38.282776    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:38.293406    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:38.293425    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:38.293430    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:38.305514    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:38.305524    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:38.330469    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:38.330478    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:38.368803    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:38.368811    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:38.373469    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:38.373476    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:38.386950    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:38.386960    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:38.398605    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:38.398615    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:41:38.409919    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:38.409928    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:38.424948    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:38.424958    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:38.461144    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:38.461156    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:38.476094    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:38.476103    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:38.488030    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:38.488040    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:38.505711    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:38.505725    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:41.022137    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:42.507518    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:42.507658    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:42.521233    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:42.521304    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:42.531626    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:42.531701    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:42.541809    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:42.541885    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:42.552070    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:42.552134    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:42.562470    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:42.562526    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:42.573540    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:42.573592    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:42.585240    9870 logs.go:276] 0 containers: []
	W0805 04:41:42.585252    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:42.585305    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:42.596249    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:42.596266    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:42.596273    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:42.600535    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:42.600548    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:42.639569    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:42.639580    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:42.655885    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:42.655897    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:42.669438    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:42.669452    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:42.706121    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:42.706131    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:42.720543    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:42.720553    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:42.733646    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:42.733655    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:42.745234    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:42.745244    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:42.762189    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:42.762198    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:42.788949    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:42.788958    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:42.827174    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:42.827183    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:42.842168    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:42.842178    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:42.855952    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:42.855962    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:42.867254    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:42.867263    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:42.888955    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:42.888972    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:42.903570    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:42.903582    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:46.024593    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:46.024839    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:46.049689    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:46.049797    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:46.067433    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:46.067504    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:46.080456    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:46.080521    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:46.092143    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:46.092212    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:46.102675    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:46.102744    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:46.113373    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:46.113439    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:46.127696    9720 logs.go:276] 0 containers: []
	W0805 04:41:46.127706    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:46.127760    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:46.138077    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:46.138092    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:46.138097    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:41:46.151000    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:46.151013    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:46.163334    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:46.163344    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:46.180764    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:46.180774    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:46.204686    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:46.204698    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:46.241747    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:46.241756    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:46.255272    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:46.255283    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:46.267132    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:46.267145    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:46.282272    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:46.282284    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:46.293906    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:46.293917    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:46.307758    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:46.307769    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:46.312300    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:46.312308    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:46.353386    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:46.353401    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:45.419767    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:48.870715    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:50.422141    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:50.422287    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:50.439193    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:50.439279    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:50.452567    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:50.452636    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:50.463997    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:50.464059    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:50.474436    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:50.474499    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:50.486079    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:50.486143    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:50.496983    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:50.497043    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:50.507051    9870 logs.go:276] 0 containers: []
	W0805 04:41:50.507062    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:50.507115    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:50.517590    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:50.517609    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:50.517615    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:50.532010    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:50.532020    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:50.543048    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:50.543059    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:50.555606    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:50.555617    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:50.559778    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:50.559785    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:50.582727    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:50.582734    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:50.620289    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:50.620300    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:50.631177    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:50.631189    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:50.652874    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:50.652886    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:50.668333    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:50.668346    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:50.681740    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:50.681753    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:50.693613    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:50.693624    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:50.710568    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:50.710579    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:50.724255    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:50.724266    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:50.762588    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:50.762596    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:50.798275    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:50.798286    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:50.813097    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:50.813109    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:53.326953    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:53.873004    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:53.873252    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:53.898125    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:41:53.898239    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:53.915563    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:41:53.915648    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:53.928384    9720 logs.go:276] 2 containers: [8ef432b0c449 5e231bd101ad]
	I0805 04:41:53.928454    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:53.939618    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:41:53.939684    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:53.954975    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:41:53.955046    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:53.965233    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:41:53.965298    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:53.975163    9720 logs.go:276] 0 containers: []
	W0805 04:41:53.975176    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:53.975230    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:53.985168    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:41:53.985185    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:53.985192    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:54.027840    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:41:54.027852    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:41:54.042172    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:41:54.042183    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:41:54.054007    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:41:54.054017    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:41:54.069416    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:54.069425    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:54.108065    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:54.108074    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:54.112573    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:41:54.112581    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:41:54.124411    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:41:54.124422    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:41:54.145328    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:41:54.145338    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:41:54.156980    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:54.156991    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:54.182712    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:41:54.182725    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:54.195913    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:41:54.195924    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:41:54.211077    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:41:54.211088    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:41:56.724832    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:58.328242    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:58.328331    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:58.344525    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:58.344593    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:58.355027    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:58.355092    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:58.370892    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:58.370965    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:58.381624    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:58.381696    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:58.392516    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:58.392584    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:58.403339    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:58.403402    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:58.414089    9870 logs.go:276] 0 containers: []
	W0805 04:41:58.414102    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:58.414156    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:58.425458    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:58.425476    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:58.425482    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:58.436608    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:58.436619    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:58.455201    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:58.455211    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:58.468615    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:58.468626    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:58.490260    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:58.490271    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:58.507565    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:58.507577    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:58.522060    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:58.522075    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:58.533346    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:58.533361    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:58.557834    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:58.557843    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:58.572659    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:58.572672    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:58.584052    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:58.584062    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:58.621411    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:58.621422    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:58.635647    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:58.635661    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:58.676203    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:58.676215    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:58.688130    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:58.688141    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:58.700069    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:58.700083    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:58.704155    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:58.704162    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:01.727197    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:01.727412    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:01.749717    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:01.749821    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:01.765865    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:01.765944    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:01.778638    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:01.778707    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:01.789659    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:01.789720    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:01.803183    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:01.803249    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:01.813741    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:01.813811    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:01.824275    9720 logs.go:276] 0 containers: []
	W0805 04:42:01.824285    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:01.824339    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:01.834715    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:01.834734    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:01.834739    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:01.849420    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:01.849431    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:01.864964    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:01.864975    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:01.889601    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:01.889612    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:01.902133    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:01.902147    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:01.938925    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:01.938933    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:01.950816    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:01.950848    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:01.964550    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:01.964560    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:01.976241    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:01.976253    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:01.990744    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:01.990755    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:02.002141    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:02.002156    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:02.014641    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:02.014651    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:02.026206    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:02.026218    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:02.049186    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:02.049196    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:02.053990    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:02.053999    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:01.241376    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:04.590489    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:06.243277    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:06.243469    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:06.265250    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:06.265345    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:06.279489    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:06.279564    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:06.291829    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:06.291897    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:06.302690    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:06.302757    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:06.313531    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:06.313599    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:06.324412    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:06.324480    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:06.334172    9870 logs.go:276] 0 containers: []
	W0805 04:42:06.334183    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:06.334236    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:06.345389    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:06.345407    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:06.345412    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:06.359382    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:06.359396    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:06.371072    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:06.371085    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:06.384711    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:06.384722    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:06.398584    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:06.398599    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:06.414350    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:06.414362    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:06.437627    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:06.437638    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:06.452073    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:06.452086    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:06.456405    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:06.456412    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:06.478544    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:06.478556    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:06.495836    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:06.495849    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:06.509827    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:06.509839    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:06.546916    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:06.546926    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:06.567147    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:06.567158    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:06.609315    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:06.609326    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:06.623503    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:06.623514    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:06.635655    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:06.635666    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:09.176301    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:09.591239    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:09.591456    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:09.615876    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:09.615979    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:09.631989    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:09.632075    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:09.645538    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:09.645625    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:09.662877    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:09.662946    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:09.673364    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:09.673431    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:09.683846    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:09.683907    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:09.694231    9720 logs.go:276] 0 containers: []
	W0805 04:42:09.694240    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:09.694289    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:09.704752    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:09.704773    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:09.704778    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:09.709732    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:09.709741    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:09.723811    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:09.723820    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:09.735759    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:09.735769    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:09.748184    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:09.748195    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:09.759573    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:09.759583    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:09.783669    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:09.783676    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:09.819342    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:09.819349    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:09.833971    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:09.833981    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:09.845761    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:09.845771    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:09.857406    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:09.857417    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:09.893961    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:09.893971    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:09.905669    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:09.905682    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:09.916964    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:09.916975    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:09.932299    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:09.932312    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:14.178720    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:14.179182    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:14.218336    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:14.218472    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:14.239936    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:14.240034    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:14.254577    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:14.254649    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:14.266972    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:14.267044    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:14.277990    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:14.278061    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:14.293221    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:14.293298    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:14.303329    9870 logs.go:276] 0 containers: []
	W0805 04:42:14.303339    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:14.303395    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:14.314104    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:14.314125    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:14.314131    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:14.318587    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:14.318594    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:14.333166    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:14.333177    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:14.345197    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:14.345209    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:14.359977    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:14.359988    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:14.371962    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:14.371974    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:14.389347    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:14.389357    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:14.403368    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:14.403378    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:14.416213    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:14.416223    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:14.428174    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:14.428189    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:14.442374    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:14.442386    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:14.458643    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:14.458653    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:14.486831    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:14.486842    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:14.499117    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:14.499133    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:14.524092    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:14.524103    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:14.561332    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:14.561341    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:14.596261    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:14.596273    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:12.451845    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:17.136977    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:17.454119    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:17.454195    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:17.466009    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:17.466074    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:17.480739    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:17.480806    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:17.491419    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:17.491495    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:17.501723    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:17.501792    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:17.513759    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:17.513824    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:17.524766    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:17.524827    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:17.535877    9720 logs.go:276] 0 containers: []
	W0805 04:42:17.535891    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:17.535943    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:17.551012    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:17.551029    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:17.551034    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:17.555561    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:17.555568    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:17.571524    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:17.571541    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:17.586053    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:17.586066    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:17.601629    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:17.601640    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:17.616907    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:17.616922    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:17.628735    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:17.628748    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:17.647031    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:17.647041    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:17.672335    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:17.672345    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:17.687076    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:17.687087    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:17.698704    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:17.698718    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:17.710396    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:17.710409    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:17.722125    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:17.722135    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:17.759757    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:17.759767    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:17.831740    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:17.831751    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:20.345739    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:22.138495    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:22.138908    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:22.178286    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:22.178410    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:22.201016    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:22.201113    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:22.216335    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:22.216410    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:22.228767    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:22.228851    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:22.241370    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:22.241435    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:22.252083    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:22.252145    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:22.265802    9870 logs.go:276] 0 containers: []
	W0805 04:42:22.265814    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:22.265873    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:22.276333    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:22.276372    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:22.276379    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:22.314651    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:22.314663    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:22.328600    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:22.328610    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:22.350193    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:22.350204    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:22.364348    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:22.364359    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:22.376073    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:22.376084    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:22.388343    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:22.388354    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:22.425282    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:22.425294    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:22.429106    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:22.429112    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:22.475051    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:22.475061    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:22.493110    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:22.493121    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:22.505005    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:22.505019    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:22.518908    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:22.518919    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:22.530616    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:22.530628    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:22.545045    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:22.545055    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:22.563911    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:22.563921    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:22.578245    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:22.578255    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:25.348384    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:25.348535    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:25.362854    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:25.362938    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:25.377276    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:25.377345    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:25.390657    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:25.390724    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:25.400991    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:25.401061    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:25.411265    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:25.411329    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:25.421791    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:25.421860    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:25.431643    9720 logs.go:276] 0 containers: []
	W0805 04:42:25.431655    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:25.431709    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:25.442670    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:25.442686    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:25.442692    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:25.454308    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:25.454319    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:25.466640    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:25.466652    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:25.486305    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:25.486315    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:25.511271    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:25.511281    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:25.523251    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:25.523263    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:25.561245    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:25.561256    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:25.575330    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:25.575342    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:25.588618    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:25.588633    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:25.600031    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:25.600045    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:25.605120    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:25.605127    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:25.619028    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:25.619037    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:25.646405    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:25.646415    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:25.684470    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:25.684484    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:25.696708    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:25.696722    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:25.103419    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:28.216464    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:30.105846    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:30.106133    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:30.135874    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:30.135994    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:30.155814    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:30.155898    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:30.169331    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:30.169405    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:30.181093    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:30.181171    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:30.192633    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:30.192701    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:30.204535    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:30.204605    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:30.214829    9870 logs.go:276] 0 containers: []
	W0805 04:42:30.214842    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:30.214895    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:30.225277    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:30.225295    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:30.225301    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:30.261864    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:30.261871    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:30.297817    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:30.297827    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:30.319185    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:30.319197    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:30.333458    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:30.333472    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:30.371228    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:30.371240    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:30.384764    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:30.384778    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:30.398612    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:30.398625    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:30.423621    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:30.423638    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:30.435774    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:30.435786    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:30.464016    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:30.464029    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:30.482955    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:30.482966    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:30.496883    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:30.496896    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:30.513356    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:30.513368    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:30.519047    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:30.519055    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:30.535798    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:30.535812    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:30.547556    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:30.547569    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
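
Before each gathering pass, the runs of `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` above enumerate every container, running or exited, per control-plane component; an empty result produces the warning that no container matches "kindnet". A rough sketch of that enumeration step, assuming a local docker CLI rather than the SSH-tunnelled runner the log shows:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same query as the log lines above: list the IDs
    // of all containers, including exited ones, named k8s_<component>.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("docker ps failed:", err)
                return
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
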
	I0805 04:42:33.061751    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:33.218971    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:33.219158    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:33.241799    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:33.241925    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:33.257178    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:33.257272    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:33.270172    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:33.270243    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:33.282501    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:33.282562    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:33.293437    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:33.293500    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:33.307825    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:33.307891    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:33.318314    9720 logs.go:276] 0 containers: []
	W0805 04:42:33.318324    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:33.318376    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:33.328799    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:33.328814    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:33.328818    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:33.333346    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:33.333351    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:33.345081    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:33.345093    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:33.357064    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:33.357073    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:33.382286    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:33.382296    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:33.398200    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:33.398213    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:33.410301    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:33.410310    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:33.421727    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:33.421740    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:33.432928    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:33.432937    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:33.471554    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:33.471564    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:33.509183    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:33.509196    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:33.526860    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:33.526870    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:33.550750    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:33.550758    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:33.562115    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:33.562128    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:33.577620    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:33.577630    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:36.097438    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:38.064488    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:38.064653    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:38.077906    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:38.077984    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:38.088839    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:38.088899    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:38.099929    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:38.099998    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:38.110612    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:38.110686    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:38.121234    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:38.121294    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:38.131586    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:38.131648    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:38.141609    9870 logs.go:276] 0 containers: []
	W0805 04:42:38.141620    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:38.141678    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:38.152189    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:38.152209    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:38.152214    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:38.166178    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:38.166188    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:38.177087    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:38.177098    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:38.191695    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:38.191704    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:38.229236    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:38.229249    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:38.243095    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:38.243105    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:38.256564    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:38.256573    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:38.274115    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:38.274126    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:38.285636    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:38.285645    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:38.297097    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:38.297107    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:38.308099    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:38.308112    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:38.330654    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:38.330661    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:38.335094    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:38.335101    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:38.371263    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:38.371276    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:38.385898    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:38.385912    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:38.407036    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:38.407048    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:38.420020    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:38.420032    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:41.100010    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:41.100306    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:41.134509    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:41.134641    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:41.153918    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:41.154010    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:41.168791    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:41.168875    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:41.180909    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:41.180977    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:41.191721    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:41.191788    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:41.205721    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:41.205792    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:41.217368    9720 logs.go:276] 0 containers: []
	W0805 04:42:41.217381    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:41.217439    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:41.227998    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:41.228019    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:41.228025    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:41.243509    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:41.243519    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:41.266172    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:41.266186    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:41.281759    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:41.281772    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:41.309176    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:41.309190    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:41.323568    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:41.323577    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:41.358156    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:41.358166    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:41.372342    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:41.372353    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:41.384392    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:41.384404    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:41.422238    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:41.422253    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:41.426561    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:41.426568    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:41.440655    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:41.440664    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:41.452864    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:41.452876    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:41.465371    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:41.465386    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:41.479144    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:41.479155    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:40.961196    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:44.008793    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:45.963717    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:45.963933    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:45.987179    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:45.987294    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:46.003467    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:46.003542    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:46.018284    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:46.018358    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:46.029393    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:46.029464    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:46.042277    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:46.042345    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:46.052753    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:46.052825    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:46.062712    9870 logs.go:276] 0 containers: []
	W0805 04:42:46.062725    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:46.062781    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:46.073168    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:46.073186    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:46.073192    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:46.084388    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:46.084401    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:46.088639    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:46.088647    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:46.101298    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:46.101308    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:46.118435    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:46.118445    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:46.129974    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:46.129986    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:46.151622    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:46.151632    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:46.165189    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:46.165199    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:46.176918    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:46.176928    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:46.194919    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:46.194932    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:46.207298    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:46.207309    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:46.246613    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:46.246623    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:46.286133    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:46.286142    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:46.299777    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:46.299787    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:46.324066    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:46.324073    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:46.364188    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:46.364202    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:46.381836    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:46.381845    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:48.898812    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:49.010750    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:49.010829    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:49.021383    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:49.021450    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:49.032219    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:49.032279    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:49.043186    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:49.043255    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:49.053733    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:49.053800    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:49.064172    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:49.064236    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:49.074723    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:49.074783    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:49.084573    9720 logs.go:276] 0 containers: []
	W0805 04:42:49.084586    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:49.084637    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:49.095415    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:49.095437    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:49.095445    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:49.108048    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:49.108058    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:49.125874    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:49.125884    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:49.140201    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:49.140214    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:49.155403    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:49.155413    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:49.169477    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:49.169486    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:49.181380    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:49.181390    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:49.192929    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:49.192941    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:49.204632    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:49.204643    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:49.239680    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:49.239690    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:49.244164    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:49.244174    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:49.260428    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:49.260442    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:49.271938    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:49.271948    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:49.283736    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:49.283748    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:49.307038    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:49.307046    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
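
Each `Gathering logs for X ...` line pairs with exactly one shell command: `docker logs --tail 400 <id>` for containers, `journalctl` for the kubelet and Docker units, a severity-filtered `dmesg` for the kernel, a crictl/docker fallback for container status, and the version-pinned kubectl for describe nodes. The source order shifts from cycle to cycle, which is consistent with the sources being ranged over as a Go map. A condensed sketch of that dispatch, assuming local execution (the real commands go through ssh_runner); the commands are copied verbatim from the log, and the kube-apiserver container ID is the one from the 9720 process's enumeration above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherSources maps each log-source label to the shell command the
    // report shows for it.
    var gatherSources = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        "describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        "kube-apiserver":   "docker logs --tail 400 474948e38f63",
    }

    func main() {
        // Ranging over a map visits keys in a different order on each run,
        // matching the shifting source order between the gathering cycles above.
        for name, cmd := range gatherSources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s failed: %v\n", name, err)
            }
            fmt.Printf("%s", out)
        }
    }
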
	I0805 04:42:51.845350    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:53.901089    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:53.901255    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:53.920153    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:53.920237    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:53.933527    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:53.933595    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:53.944883    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:53.944945    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:53.955605    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:53.955663    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:53.966400    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:53.966464    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:53.977033    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:53.977095    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:53.987398    9870 logs.go:276] 0 containers: []
	W0805 04:42:53.987409    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:53.987461    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:53.998069    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:53.998087    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:53.998092    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:54.020365    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:54.020373    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:54.024395    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:54.024402    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:54.037820    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:54.037832    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:54.074818    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:54.074833    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:54.092032    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:54.092045    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:54.115388    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:54.115398    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:54.129778    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:54.129788    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:54.140834    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:54.140845    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:54.178851    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:54.178860    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:54.199531    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:54.199541    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:54.220001    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:54.220010    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:54.231602    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:54.231613    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:54.249069    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:54.249079    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:54.260213    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:54.260226    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:54.299056    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:54.299067    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:54.316563    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:54.316575    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:56.848105    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:56.848278    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:56.880326    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:42:56.880439    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:56.898993    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:42:56.899076    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:56.912069    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:42:56.912144    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:56.923683    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:42:56.923750    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:56.934386    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:42:56.934450    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:56.944673    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:42:56.944735    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:56.955437    9720 logs.go:276] 0 containers: []
	W0805 04:42:56.955448    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:56.955505    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:56.966859    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:42:56.966878    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:56.966884    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:57.005970    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:57.005978    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:57.041185    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:57.041195    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:57.064181    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:42:57.064187    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:42:57.079033    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:42:57.079047    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:42:57.090735    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:42:57.090744    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:42:57.108710    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:42:57.108720    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:57.123206    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:42:57.123216    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:42:57.134781    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:57.134792    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:57.139890    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:42:57.139897    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:42:57.157046    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:42:57.157060    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:42:57.171278    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:42:57.171288    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:42:57.182852    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:42:57.182863    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:42:57.213613    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:42:57.213624    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:42:57.233920    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:42:57.233930    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:42:56.828957    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:59.747239    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:01.831491    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:01.831819    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:01.862479    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:43:01.862589    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:01.882259    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:43:01.882345    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:01.896357    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:43:01.896429    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:01.908128    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:43:01.908197    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:01.918696    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:43:01.918755    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:01.929343    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:43:01.929411    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:01.939559    9870 logs.go:276] 0 containers: []
	W0805 04:43:01.939570    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:01.939627    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:01.950725    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:43:01.950743    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:43:01.950749    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:43:01.964924    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:43:01.964935    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:01.980255    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:01.980266    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:02.016990    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:43:02.017004    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:43:02.030710    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:43:02.030718    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:43:02.067884    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:43:02.067897    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:43:02.079459    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:43:02.079471    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:43:02.090696    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:02.090706    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:02.095494    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:43:02.095502    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:43:02.109334    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:43:02.109349    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:43:02.123491    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:43:02.123505    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:43:02.137980    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:02.137991    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:02.176300    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:43:02.176314    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:43:02.198675    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:43:02.198689    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:43:02.215617    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:43:02.215628    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:43:02.226849    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:43:02.226861    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:43:02.238422    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:02.238434    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:04.764069    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:04.749614    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:04.749750    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:04.760333    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:04.760409    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:04.771825    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:04.771895    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:04.783620    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:43:04.783688    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:04.799013    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:04.799088    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:04.809945    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:04.810010    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:04.820516    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:04.820576    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:04.830280    9720 logs.go:276] 0 containers: []
	W0805 04:43:04.830292    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:04.830349    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:04.840687    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:04.840705    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:04.840710    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:04.854517    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:04.854528    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:04.872715    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:04.872725    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:04.884049    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:04.884059    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:04.895575    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:04.895585    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:04.934474    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:04.934486    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:04.939429    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:04.939438    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:04.954182    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:04.954192    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:04.978317    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:04.978328    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:05.014084    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:43:05.014097    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:43:05.025812    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:05.025823    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:05.040685    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:05.040696    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:05.055427    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:05.055437    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:05.066682    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:05.066690    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:05.077804    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:43:05.077813    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:43:09.766452    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:09.766602    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:09.780171    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:43:09.780258    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:09.792056    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:43:09.792131    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:09.802799    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:43:09.802862    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:09.813595    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:43:09.813676    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:09.824198    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:43:09.824262    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:09.834849    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:43:09.835013    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:09.845368    9870 logs.go:276] 0 containers: []
	W0805 04:43:09.845378    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:09.845423    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:09.863000    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:43:09.863015    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:09.863021    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:07.591657    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:09.900800    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:43:09.900816    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:43:09.924415    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:43:09.924425    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:43:09.935950    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:43:09.935959    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:43:09.947268    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:43:09.947283    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:43:09.960696    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:09.960706    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:09.964856    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:43:09.964862    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:43:09.978881    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:43:09.978895    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:43:10.016349    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:43:10.016358    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:43:10.030096    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:43:10.030105    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:43:10.044680    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:43:10.044690    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:43:10.056140    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:43:10.056155    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:43:10.073845    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:43:10.073854    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:43:10.085021    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:10.085031    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:10.106740    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:10.106748    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:10.143499    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:43:10.143513    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:43:10.164066    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:43:10.164077    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:12.677844    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:12.594007    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:12.594244    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:12.617202    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:12.617295    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:12.633907    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:12.633979    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:12.646494    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:43:12.646567    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:12.657941    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:12.658009    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:12.669389    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:12.669458    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:12.680467    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:12.680524    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:12.690667    9720 logs.go:276] 0 containers: []
	W0805 04:43:12.690682    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:12.690728    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:12.701072    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:12.701089    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:12.701095    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:12.736523    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:12.736534    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:12.751259    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:12.751271    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:12.756293    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:12.756303    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:12.769191    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:12.769202    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:12.781215    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:43:12.781226    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:43:12.794651    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:12.794662    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:12.810375    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:12.810386    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:12.822098    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:12.822108    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:12.833655    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:12.833666    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:12.854113    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:12.854123    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:12.868568    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:43:12.868577    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:43:12.880905    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:12.880916    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:12.898779    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:12.898789    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:12.922451    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:12.922459    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:15.461875    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:17.680112    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:17.680243    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:17.691594    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:43:17.691673    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:17.710037    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:43:17.710118    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:17.732256    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:43:17.732323    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:17.745239    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:43:17.745306    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:17.756333    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:43:17.756401    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:17.767335    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:43:17.767405    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:17.779105    9870 logs.go:276] 0 containers: []
	W0805 04:43:17.779117    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:17.779174    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:17.789725    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:43:17.789743    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:43:17.789750    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:43:17.807465    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:17.807476    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:17.830027    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:17.830035    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:17.834148    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:43:17.834157    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:43:17.845931    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:43:17.845942    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:43:17.857638    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:17.857649    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:17.892498    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:43:17.892510    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:43:17.930509    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:43:17.930523    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:43:17.944698    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:43:17.944707    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:43:17.958894    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:43:17.958904    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:43:17.980466    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:43:17.980481    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:43:17.998645    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:43:17.998655    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:43:18.016052    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:43:18.016063    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:18.028091    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:43:18.028101    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:43:18.042463    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:43:18.042473    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:43:18.053549    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:43:18.053561    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:43:18.067500    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:18.067510    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:20.464277    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:20.464570    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:20.498752    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:20.498884    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:20.519430    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:20.519508    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:20.533564    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:43:20.533643    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:20.545325    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:20.545394    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:20.556360    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:20.556423    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:20.567436    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:20.567493    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:20.577662    9720 logs.go:276] 0 containers: []
	W0805 04:43:20.577678    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:20.577742    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:20.588409    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:20.588449    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:20.588454    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:20.600252    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:20.600266    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:20.614687    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:20.614695    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:20.626622    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:43:20.626632    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:43:20.638852    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:43:20.638865    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:43:20.651571    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:20.651582    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:20.664234    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:20.664244    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:20.688537    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:20.688549    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:20.713670    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:20.713680    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:20.725882    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:20.725896    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:20.730955    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:20.730964    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:20.767756    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:20.767767    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:20.783334    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:20.783350    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:20.820854    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:20.820871    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:20.835956    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:20.835967    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:20.604931    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:23.350298    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:25.607232    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:25.607390    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:25.618851    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:43:25.618920    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:25.633424    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:43:25.633485    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:25.643650    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:43:25.643716    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:25.655913    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:43:25.655979    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:25.666007    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:43:25.666064    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:25.681618    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:43:25.681682    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:25.691819    9870 logs.go:276] 0 containers: []
	W0805 04:43:25.691829    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:25.691878    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:25.702021    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:43:25.702042    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:25.702047    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:25.706211    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:43:25.706221    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:43:25.720609    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:43:25.720619    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:43:25.742330    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:43:25.742341    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:43:25.759671    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:43:25.759681    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:43:25.771002    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:43:25.771013    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:43:25.782180    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:25.782190    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:25.804611    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:25.804618    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:25.843724    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:43:25.843737    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:43:25.882416    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:43:25.882426    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:43:25.896801    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:43:25.896813    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:43:25.910223    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:43:25.910232    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:43:25.932294    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:43:25.932305    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:43:25.946561    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:43:25.946570    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:43:25.963554    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:43:25.963564    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:25.976252    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:25.976262    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:26.014611    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:43:26.014624    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:43:28.529479    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:28.352780    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:28.353066    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:28.381009    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:28.381135    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:28.398194    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:28.398276    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:28.411844    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:43:28.411917    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:28.423518    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:28.423612    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:28.434377    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:28.434446    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:28.445339    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:28.445406    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:28.455369    9720 logs.go:276] 0 containers: []
	W0805 04:43:28.455380    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:28.455439    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:28.472810    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:28.472827    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:43:28.472832    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:43:28.485246    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:43:28.485256    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:43:28.496901    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:28.496916    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:28.517712    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:28.517724    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:28.533311    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:28.533322    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:28.545672    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:28.545688    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:28.557211    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:28.557226    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:28.593347    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:28.593356    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:28.616057    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:28.616064    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:28.629355    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:28.629364    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:28.644379    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:28.644389    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:28.658297    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:28.658308    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:28.671867    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:28.671878    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:28.676431    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:28.676439    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:28.716532    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:28.716543    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:31.233013    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:33.531745    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:33.531829    9870 kubeadm.go:597] duration metric: took 4m4.115085833s to restartPrimaryControlPlane
	W0805 04:43:33.531905    9870 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 04:43:33.531944    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0805 04:43:34.598765    9870 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.066797875s)
	I0805 04:43:34.598836    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 04:43:34.603632    9870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 04:43:34.606629    9870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 04:43:34.609283    9870 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 04:43:34.609289    9870 kubeadm.go:157] found existing configuration files:
	
	I0805 04:43:34.609313    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/admin.conf
	I0805 04:43:34.611699    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 04:43:34.611723    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 04:43:34.614718    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/kubelet.conf
	I0805 04:43:34.617554    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 04:43:34.617577    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 04:43:34.620231    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/controller-manager.conf
	I0805 04:43:34.623253    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 04:43:34.623273    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 04:43:34.626170    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/scheduler.conf
	I0805 04:43:34.628640    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 04:43:34.628659    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 04:43:34.631971    9870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 04:43:34.650461    9870 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0805 04:43:34.650530    9870 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 04:43:34.701823    9870 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 04:43:34.701903    9870 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 04:43:34.701973    9870 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 04:43:34.751892    9870 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 04:43:34.757145    9870 out.go:204]   - Generating certificates and keys ...
	I0805 04:43:34.757181    9870 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 04:43:34.757252    9870 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 04:43:34.757357    9870 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 04:43:34.757388    9870 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 04:43:34.757456    9870 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 04:43:34.757500    9870 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 04:43:34.757592    9870 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 04:43:34.757625    9870 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 04:43:34.757666    9870 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 04:43:34.757724    9870 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 04:43:34.757747    9870 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 04:43:34.757776    9870 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 04:43:34.843975    9870 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 04:43:34.960871    9870 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 04:43:35.022431    9870 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 04:43:35.144484    9870 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 04:43:35.173113    9870 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 04:43:35.173607    9870 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 04:43:35.173629    9870 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 04:43:35.261357    9870 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 04:43:36.235445    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:36.235542    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:36.247899    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:36.247986    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:36.259546    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:36.259623    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:36.271536    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:43:36.271608    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:36.282512    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:36.282631    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:36.294451    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:36.294522    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:36.306713    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:36.306779    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:36.318970    9720 logs.go:276] 0 containers: []
	W0805 04:43:36.318982    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:36.319046    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:36.331062    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:36.331080    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:36.331086    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:36.347551    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:36.347564    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:36.363428    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:36.363444    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:36.377835    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:36.377847    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:36.391702    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:36.391714    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:36.416880    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:36.416898    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:36.458124    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:36.458136    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:36.470883    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:36.470894    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:36.489906    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:36.489918    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:36.502490    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:36.502502    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:36.520277    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:36.520293    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:36.561840    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:36.561860    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:36.567018    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:36.567026    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:36.580894    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:43:36.580904    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:43:36.593787    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:43:36.593798    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:43:35.264512    9870 out.go:204]   - Booting up control plane ...
	I0805 04:43:35.264654    9870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 04:43:35.265188    9870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 04:43:35.266060    9870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 04:43:35.269489    9870 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 04:43:35.270269    9870 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 04:43:39.772615    9870 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501764 seconds
	I0805 04:43:39.772693    9870 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 04:43:39.776943    9870 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 04:43:40.299325    9870 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 04:43:40.299597    9870 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-528000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 04:43:40.802911    9870 kubeadm.go:310] [bootstrap-token] Using token: k9o0ky.p7snj7ic9optnkq4
	I0805 04:43:40.804380    9870 out.go:204]   - Configuring RBAC rules ...
	I0805 04:43:40.804442    9870 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 04:43:40.805000    9870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 04:43:40.808592    9870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 04:43:40.809814    9870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 04:43:40.810756    9870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 04:43:40.811606    9870 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 04:43:40.814703    9870 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 04:43:40.983063    9870 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 04:43:41.207305    9870 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 04:43:41.207824    9870 kubeadm.go:310] 
	I0805 04:43:41.207853    9870 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 04:43:41.207856    9870 kubeadm.go:310] 
	I0805 04:43:41.207893    9870 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 04:43:41.207898    9870 kubeadm.go:310] 
	I0805 04:43:41.207909    9870 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 04:43:41.207936    9870 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 04:43:41.207970    9870 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 04:43:41.207976    9870 kubeadm.go:310] 
	I0805 04:43:41.208003    9870 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 04:43:41.208007    9870 kubeadm.go:310] 
	I0805 04:43:41.208031    9870 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 04:43:41.208034    9870 kubeadm.go:310] 
	I0805 04:43:41.208064    9870 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 04:43:41.208106    9870 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 04:43:41.208140    9870 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 04:43:41.208146    9870 kubeadm.go:310] 
	I0805 04:43:41.208183    9870 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 04:43:41.208220    9870 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 04:43:41.208224    9870 kubeadm.go:310] 
	I0805 04:43:41.208267    9870 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k9o0ky.p7snj7ic9optnkq4 \
	I0805 04:43:41.208323    9870 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:00ad0c80a9f7b4b654bf16d7fdaf8cb3872452317480a453e3b9036c421b1809 \
	I0805 04:43:41.208337    9870 kubeadm.go:310] 	--control-plane 
	I0805 04:43:41.208341    9870 kubeadm.go:310] 
	I0805 04:43:41.208385    9870 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 04:43:41.208389    9870 kubeadm.go:310] 
	I0805 04:43:41.208448    9870 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k9o0ky.p7snj7ic9optnkq4 \
	I0805 04:43:41.208504    9870 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:00ad0c80a9f7b4b654bf16d7fdaf8cb3872452317480a453e3b9036c421b1809 
	I0805 04:43:41.208644    9870 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 04:43:41.208654    9870 cni.go:84] Creating CNI manager for ""
	I0805 04:43:41.208666    9870 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:43:41.211969    9870 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 04:43:41.218093    9870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 04:43:41.220931    9870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 04:43:41.225974    9870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 04:43:41.226015    9870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 04:43:41.226036    9870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-528000 minikube.k8s.io/updated_at=2024_08_05T04_43_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=stopped-upgrade-528000 minikube.k8s.io/primary=true
	I0805 04:43:41.266820    9870 kubeadm.go:1113] duration metric: took 40.838292ms to wait for elevateKubeSystemPrivileges
	I0805 04:43:41.266835    9870 ops.go:34] apiserver oom_adj: -16
	I0805 04:43:41.266840    9870 kubeadm.go:394] duration metric: took 4m11.863592666s to StartCluster
	I0805 04:43:41.266850    9870 settings.go:142] acquiring lock: {Name:mk4ccaf175b574f554efa4f63e0208c978f3f34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:43:41.266940    9870 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:43:41.267374    9870 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/kubeconfig: {Name:mk9388f295704cbd2679ba0e5c0bb91678f79ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:43:41.267587    9870 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:43:41.267642    9870 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 04:43:41.267682    9870 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-528000"
	I0805 04:43:41.267690    9870 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-528000"
	I0805 04:43:41.267696    9870 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:43:41.267705    9870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-528000"
	I0805 04:43:41.267694    9870 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-528000"
	W0805 04:43:41.267767    9870 addons.go:243] addon storage-provisioner should already be in state true
	I0805 04:43:41.267778    9870 host.go:66] Checking if "stopped-upgrade-528000" exists ...
	I0805 04:43:41.272048    9870 out.go:177] * Verifying Kubernetes components...
	I0805 04:43:41.272724    9870 kapi.go:59] client config for stopped-upgrade-528000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/client.key", CAFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1024d01b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 04:43:41.276228    9870 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-528000"
	W0805 04:43:41.276233    9870 addons.go:243] addon default-storageclass should already be in state true
	I0805 04:43:41.276241    9870 host.go:66] Checking if "stopped-upgrade-528000" exists ...
	I0805 04:43:41.276819    9870 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 04:43:41.276825    9870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 04:43:41.276830    9870 sshutil.go:53] new ssh client: &{IP:localhost Port:51431 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/id_rsa Username:docker}
	I0805 04:43:41.279995    9870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:43:39.109488    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:41.287304    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:43:41.287332    9870 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 04:43:41.287345    9870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 04:43:41.287353    9870 sshutil.go:53] new ssh client: &{IP:localhost Port:51431 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/id_rsa Username:docker}
	I0805 04:43:41.373757    9870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 04:43:41.378614    9870 api_server.go:52] waiting for apiserver process to appear ...
	I0805 04:43:41.378656    9870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:43:41.382491    9870 api_server.go:72] duration metric: took 114.892292ms to wait for apiserver process to appear ...
	I0805 04:43:41.382499    9870 api_server.go:88] waiting for apiserver healthz status ...
	I0805 04:43:41.382506    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:41.392326    9870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 04:43:41.455709    9870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 04:43:44.111953    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:44.112093    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:44.128160    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:44.128230    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:44.140469    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:44.140536    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:44.151409    9720 logs.go:276] 4 containers: [1b9570e90766 ef130aa43104 8ef432b0c449 5e231bd101ad]
	I0805 04:43:44.151483    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:44.161971    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:44.162034    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:44.180436    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:44.180503    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:44.190929    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:44.190999    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:44.201028    9720 logs.go:276] 0 containers: []
	W0805 04:43:44.201040    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:44.201097    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:44.211501    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:44.211521    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:44.211527    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:44.229522    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:44.229533    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:44.241651    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:44.241661    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:44.261815    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:44.261825    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:44.275848    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:44.275859    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:44.290492    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:44.290502    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:44.302159    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:44.302170    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:44.313264    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:44.313274    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:44.328284    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:44.328294    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:44.351124    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:44.351133    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:44.355553    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:44.355559    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:44.391214    9720 logs.go:123] Gathering logs for coredns [8ef432b0c449] ...
	I0805 04:43:44.391225    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef432b0c449"
	I0805 04:43:44.402836    9720 logs.go:123] Gathering logs for coredns [5e231bd101ad] ...
	I0805 04:43:44.402846    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e231bd101ad"
	I0805 04:43:44.414502    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:44.414514    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:44.426531    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:44.426542    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:46.965581    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:46.384772    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:46.384859    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:51.968007    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:51.968136    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:51.984276    9720 logs.go:276] 1 containers: [474948e38f63]
	I0805 04:43:51.984348    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:51.995799    9720 logs.go:276] 1 containers: [cab8cdbbec39]
	I0805 04:43:51.995873    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:52.006754    9720 logs.go:276] 4 containers: [396dbef8c681 4b67b31cb033 1b9570e90766 ef130aa43104]
	I0805 04:43:52.006826    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:52.017769    9720 logs.go:276] 1 containers: [cc9a2ca90252]
	I0805 04:43:52.017833    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:52.028209    9720 logs.go:276] 1 containers: [213143049c1d]
	I0805 04:43:52.028275    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:52.042680    9720 logs.go:276] 1 containers: [1db105ef1072]
	I0805 04:43:52.042745    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:52.054670    9720 logs.go:276] 0 containers: []
	W0805 04:43:52.054683    9720 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:52.054742    9720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:52.065267    9720 logs.go:276] 1 containers: [a30c7f188dc3]
	I0805 04:43:52.065285    9720 logs.go:123] Gathering logs for kube-apiserver [474948e38f63] ...
	I0805 04:43:52.065291    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 474948e38f63"
	I0805 04:43:52.079955    9720 logs.go:123] Gathering logs for coredns [ef130aa43104] ...
	I0805 04:43:52.079965    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef130aa43104"
	I0805 04:43:52.092274    9720 logs.go:123] Gathering logs for coredns [396dbef8c681] ...
	I0805 04:43:52.092285    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dbef8c681"
	I0805 04:43:52.105406    9720 logs.go:123] Gathering logs for coredns [4b67b31cb033] ...
	I0805 04:43:52.105418    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b67b31cb033"
	I0805 04:43:52.116772    9720 logs.go:123] Gathering logs for kube-scheduler [cc9a2ca90252] ...
	I0805 04:43:52.116787    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc9a2ca90252"
	I0805 04:43:52.132999    9720 logs.go:123] Gathering logs for kube-proxy [213143049c1d] ...
	I0805 04:43:52.133011    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213143049c1d"
	I0805 04:43:52.145750    9720 logs.go:123] Gathering logs for kube-controller-manager [1db105ef1072] ...
	I0805 04:43:52.145760    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db105ef1072"
	I0805 04:43:52.163139    9720 logs.go:123] Gathering logs for storage-provisioner [a30c7f188dc3] ...
	I0805 04:43:52.163151    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30c7f188dc3"
	I0805 04:43:52.175385    9720 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:52.175396    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:52.214479    9720 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:52.214491    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:52.219191    9720 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:52.219197    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:52.242253    9720 logs.go:123] Gathering logs for container status ...
	I0805 04:43:52.242261    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:52.255058    9720 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:52.255069    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:52.291546    9720 logs.go:123] Gathering logs for etcd [cab8cdbbec39] ...
	I0805 04:43:52.291557    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab8cdbbec39"
	I0805 04:43:52.305789    9720 logs.go:123] Gathering logs for coredns [1b9570e90766] ...
	I0805 04:43:52.305799    9720 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b9570e90766"
	I0805 04:43:51.385648    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:51.385672    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:54.820492    9720 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:59.822796    9720 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:59.826283    9720 out.go:177] 
	W0805 04:43:59.831157    9720 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0805 04:43:59.831167    9720 out.go:239] * 
	W0805 04:43:59.831801    9720 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:43:59.842099    9720 out.go:177] 
	I0805 04:43:56.386206    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:56.386243    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:01.387385    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:01.387421    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:06.388426    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:06.388477    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:11.389750    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:11.389787    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0805 04:44:11.746142    9870 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0805 04:44:11.749411    9870 out.go:177] * Enabled addons: storage-provisioner
	I0805 04:44:11.758092    9870 addons.go:510] duration metric: took 30.490183125s for enable addons: enabled=[storage-provisioner]
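Both start attempts above (pids 9720 and 9870) give up after the apiserver healthz endpoint at https://10.0.2.15:8443/healthz never answers within the client timeout. A minimal sketch of the same probe, run by hand from inside the guest, since 10.0.2.15 is only reachable on the QEMU user-mode network (assumes `minikube ssh` still connects to the profile and curl is present in the buildroot image):

	# re-run the healthz probe that minikube loops on above; -k skips TLS
	# verification and --max-time mirrors the client timeout seen in the log
	minikube ssh -p running-upgrade-763000 -- curl -k --max-time 5 https://10.0.2.15:8443/healthz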
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-08-05 11:35:00 UTC, ends at Mon 2024-08-05 11:44:15 UTC. --
	Aug 05 11:43:52 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:43:52Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 05 11:43:57 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:43:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 05 11:44:00 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:00Z" level=error msg="ContainerStats resp: {0x400009c840 linux}"
	Aug 05 11:44:00 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:00Z" level=error msg="ContainerStats resp: {0x40008b05c0 linux}"
	Aug 05 11:44:01 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:01Z" level=error msg="ContainerStats resp: {0x4000760080 linux}"
	Aug 05 11:44:02 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:02Z" level=error msg="ContainerStats resp: {0x4000760840 linux}"
	Aug 05 11:44:02 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:02Z" level=error msg="ContainerStats resp: {0x4000760e80 linux}"
	Aug 05 11:44:02 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:02Z" level=error msg="ContainerStats resp: {0x4000761280 linux}"
	Aug 05 11:44:02 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:02Z" level=error msg="ContainerStats resp: {0x4000761440 linux}"
	Aug 05 11:44:02 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:02Z" level=error msg="ContainerStats resp: {0x4000761ac0 linux}"
	Aug 05 11:44:02 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:02Z" level=error msg="ContainerStats resp: {0x40006c2a00 linux}"
	Aug 05 11:44:02 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:02Z" level=error msg="ContainerStats resp: {0x40006c2f40 linux}"
	Aug 05 11:44:02 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:02Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 05 11:44:07 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:07Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 05 11:44:12 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:12Z" level=error msg="ContainerStats resp: {0x40004b4e00 linux}"
	Aug 05 11:44:12 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:12Z" level=error msg="ContainerStats resp: {0x40004b5a40 linux}"
	Aug 05 11:44:12 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:12Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 05 11:44:13 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:13Z" level=error msg="ContainerStats resp: {0x40006c3900 linux}"
	Aug 05 11:44:14 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:14Z" level=error msg="ContainerStats resp: {0x40008c02c0 linux}"
	Aug 05 11:44:14 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:14Z" level=error msg="ContainerStats resp: {0x40008c0680 linux}"
	Aug 05 11:44:14 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:14Z" level=error msg="ContainerStats resp: {0x4000604700 linux}"
	Aug 05 11:44:14 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:14Z" level=error msg="ContainerStats resp: {0x40008c1480 linux}"
	Aug 05 11:44:14 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:14Z" level=error msg="ContainerStats resp: {0x4000604ac0 linux}"
	Aug 05 11:44:14 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:14Z" level=error msg="ContainerStats resp: {0x40008c1e00 linux}"
	Aug 05 11:44:14 running-upgrade-763000 cri-dockerd[3067]: time="2024-08-05T11:44:14Z" level=error msg="ContainerStats resp: {0x4000530a80 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	396dbef8c681b       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   460c5829ca2ac
	4b67b31cb033c       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   7a03e492f5ec1
	1b9570e90766d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   7a03e492f5ec1
	ef130aa431047       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   460c5829ca2ac
	213143049c1de       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   fba6331bda902
	a30c7f188dc3b       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   a95db4cb9b82f
	cc9a2ca90252b       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   705fb319add0d
	1db105ef1072b       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   faefa192d7808
	474948e38f63b       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   ecdf15472ca9b
	cab8cdbbec394       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   5c422651224eb
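The truncated IDs in this table are the same ones the log-gathering passes above feed to docker logs. A sketch for tailing the two freshly restarted coredns containers by hand, reusing the `docker logs --tail 400` invocation logs.go runs:

	# attempt-2 coredns containers from the status table above
	minikube ssh -p running-upgrade-763000 -- docker logs --tail 400 396dbef8c681
	minikube ssh -p running-upgrade-763000 -- docker logs --tail 400 4b67b31cb033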
	
	
	==> coredns [1b9570e90766] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7845054106090580427.636141393282496086. HINFO: read udp 10.244.0.2:42888->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7845054106090580427.636141393282496086. HINFO: read udp 10.244.0.2:59846->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7845054106090580427.636141393282496086. HINFO: read udp 10.244.0.2:53522->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7845054106090580427.636141393282496086. HINFO: read udp 10.244.0.2:56422->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7845054106090580427.636141393282496086. HINFO: read udp 10.244.0.2:40637->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7845054106090580427.636141393282496086. HINFO: read udp 10.244.0.2:48187->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7845054106090580427.636141393282496086. HINFO: read udp 10.244.0.2:58117->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7845054106090580427.636141393282496086. HINFO: read udp 10.244.0.2:34233->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7845054106090580427.636141393282496086. HINFO: read udp 10.244.0.2:57902->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7845054106090580427.636141393282496086. HINFO: read udp 10.244.0.2:46772->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [396dbef8c681] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5586790165587611988.2205546727198479031. HINFO: read udp 10.244.0.3:60583->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5586790165587611988.2205546727198479031. HINFO: read udp 10.244.0.3:60327->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5586790165587611988.2205546727198479031. HINFO: read udp 10.244.0.3:39791->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5586790165587611988.2205546727198479031. HINFO: read udp 10.244.0.3:41743->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5586790165587611988.2205546727198479031. HINFO: read udp 10.244.0.3:47651->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5586790165587611988.2205546727198479031. HINFO: read udp 10.244.0.3:36674->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5586790165587611988.2205546727198479031. HINFO: read udp 10.244.0.3:56112->10.0.2.3:53: i/o timeout
	
	
	==> coredns [4b67b31cb033] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2829484989069528807.1241822304250302751. HINFO: read udp 10.244.0.2:40802->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2829484989069528807.1241822304250302751. HINFO: read udp 10.244.0.2:37660->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2829484989069528807.1241822304250302751. HINFO: read udp 10.244.0.2:41380->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2829484989069528807.1241822304250302751. HINFO: read udp 10.244.0.2:38578->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2829484989069528807.1241822304250302751. HINFO: read udp 10.244.0.2:60543->10.0.2.3:53: i/o timeout
	
	
	==> coredns [ef130aa43104] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3299304103011808649.7390019188803374685. HINFO: read udp 10.244.0.3:48198->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3299304103011808649.7390019188803374685. HINFO: read udp 10.244.0.3:32818->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3299304103011808649.7390019188803374685. HINFO: read udp 10.244.0.3:40282->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3299304103011808649.7390019188803374685. HINFO: read udp 10.244.0.3:51286->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3299304103011808649.7390019188803374685. HINFO: read udp 10.244.0.3:57750->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3299304103011808649.7390019188803374685. HINFO: read udp 10.244.0.3:48987->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3299304103011808649.7390019188803374685. HINFO: read udp 10.244.0.3:47839->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3299304103011808649.7390019188803374685. HINFO: read udp 10.244.0.3:33721->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3299304103011808649.7390019188803374685. HINFO: read udp 10.244.0.3:50911->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3299304103011808649.7390019188803374685. HINFO: read udp 10.244.0.3:48133->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
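All four coredns instances fail identically: their HINFO self-test probes to the upstream resolver time out (`read udp ...->10.0.2.3:53: i/o timeout`), which points at the QEMU slirp DNS forwarder at 10.0.2.3 rather than at coredns itself. A quick check from inside the guest (a sketch; assumes the busybox nslookup applet is available in the buildroot image):

	# query the slirp resolver directly; a timeout here confirms the
	# forwarder, not coredns, is the broken hop
	minikube ssh -p running-upgrade-763000 -- nslookup kubernetes.io 10.0.2.3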
	
	
	==> describe nodes <==
	Name:               running-upgrade-763000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-763000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
	                    minikube.k8s.io/name=running-upgrade-763000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T04_39_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 11:39:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-763000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 11:44:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 11:39:58 +0000   Mon, 05 Aug 2024 11:39:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 11:39:58 +0000   Mon, 05 Aug 2024 11:39:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 11:39:58 +0000   Mon, 05 Aug 2024 11:39:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 11:39:58 +0000   Mon, 05 Aug 2024 11:39:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-763000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 15e6ccfb9a854e75917cc82951917265
	  System UUID:                15e6ccfb9a854e75917cc82951917265
	  Boot ID:                    60756a80-f07d-4bc0-a2fa-6aa4206e0ec8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4mp4m                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-dr97d                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-763000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-763000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-763000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-86wr8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-763000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-763000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-763000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-763000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-763000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-763000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-763000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-763000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-763000 event: Registered Node running-upgrade-763000 in Controller
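Note that the node reports Ready and its kubelet lease was still being renewed at 11:44:13 UTC, even while both client processes were timing out on healthz. The same describe can be re-run on demand with the command the log gatherer itself executes (taken verbatim from the logs.go step above):

	minikube ssh -p running-upgrade-763000 -- sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig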
	
	
	==> dmesg <==
	[  +1.719754] systemd-fstab-generator[878]: Ignoring "noauto" for root device
	[  +0.067928] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.085637] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.137616] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.081129] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.077513] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.755628] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[  +8.653997] systemd-fstab-generator[1942]: Ignoring "noauto" for root device
	[  +2.685850] systemd-fstab-generator[2222]: Ignoring "noauto" for root device
	[  +0.112886] systemd-fstab-generator[2256]: Ignoring "noauto" for root device
	[  +0.093682] systemd-fstab-generator[2267]: Ignoring "noauto" for root device
	[  +0.093427] systemd-fstab-generator[2280]: Ignoring "noauto" for root device
	[ +12.529130] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.205157] systemd-fstab-generator[3021]: Ignoring "noauto" for root device
	[  +0.078008] systemd-fstab-generator[3035]: Ignoring "noauto" for root device
	[  +0.083392] systemd-fstab-generator[3046]: Ignoring "noauto" for root device
	[  +0.087585] systemd-fstab-generator[3060]: Ignoring "noauto" for root device
	[  +2.457618] systemd-fstab-generator[3213]: Ignoring "noauto" for root device
	[  +2.598220] systemd-fstab-generator[3605]: Ignoring "noauto" for root device
	[  +0.966059] systemd-fstab-generator[3733]: Ignoring "noauto" for root device
	[Aug 5 11:36] kauditd_printk_skb: 68 callbacks suppressed
	[Aug 5 11:39] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.491082] systemd-fstab-generator[11902]: Ignoring "noauto" for root device
	[  +5.151433] systemd-fstab-generator[12483]: Ignoring "noauto" for root device
	[  +0.478920] systemd-fstab-generator[12618]: Ignoring "noauto" for root device
	
	
	==> etcd [cab8cdbbec39] <==
	{"level":"info","ts":"2024-08-05T11:39:54.357Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-05T11:39:54.358Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T11:39:54.358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-05T11:39:54.358Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-05T11:39:54.358Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-05T11:39:54.358Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-05T11:39:54.358Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T11:39:54.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T11:39:54.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T11:39:54.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-05T11:39:54.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T11:39:54.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-05T11:39:54.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T11:39:54.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-05T11:39:54.545Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T11:39:54.546Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T11:39:54.546Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T11:39:54.551Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T11:39:54.559Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-05T11:39:54.563Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T11:39:54.563Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T11:39:54.545Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-763000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T11:39:54.563Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T11:39:54.563Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T11:39:54.563Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 11:44:16 up 9 min,  0 users,  load average: 0.27, 0.33, 0.18
	Linux running-upgrade-763000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [474948e38f63] <==
	I0805 11:39:56.261518       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0805 11:39:56.261537       1 cache.go:39] Caches are synced for autoregister controller
	I0805 11:39:56.261672       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 11:39:56.261790       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 11:39:56.264192       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0805 11:39:56.264307       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0805 11:39:56.265931       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0805 11:39:56.993467       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0805 11:39:57.169023       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0805 11:39:57.172234       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0805 11:39:57.172268       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 11:39:57.303486       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 11:39:57.317533       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 11:39:57.426745       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0805 11:39:57.428578       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0805 11:39:57.428922       1 controller.go:611] quota admission added evaluator for: endpoints
	I0805 11:39:57.430314       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 11:39:58.291556       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0805 11:39:58.631540       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0805 11:39:58.635110       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0805 11:39:58.639625       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0805 11:39:58.685758       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 11:40:11.995829       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0805 11:40:12.043375       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0805 11:40:12.631051       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [1db105ef1072] <==
	I0805 11:40:11.149770       1 shared_informer.go:262] Caches are synced for crt configmap
	I0805 11:40:11.193537       1 shared_informer.go:262] Caches are synced for PVC protection
	I0805 11:40:11.243050       1 shared_informer.go:262] Caches are synced for deployment
	I0805 11:40:11.243055       1 shared_informer.go:262] Caches are synced for disruption
	I0805 11:40:11.243186       1 disruption.go:371] Sending events to api server.
	I0805 11:40:11.246893       1 shared_informer.go:262] Caches are synced for resource quota
	I0805 11:40:11.302582       1 shared_informer.go:262] Caches are synced for resource quota
	I0805 11:40:11.320028       1 shared_informer.go:262] Caches are synced for attach detach
	I0805 11:40:11.329270       1 shared_informer.go:262] Caches are synced for daemon sets
	I0805 11:40:11.342417       1 shared_informer.go:262] Caches are synced for taint
	I0805 11:40:11.342546       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0805 11:40:11.342580       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-763000. Assuming now as a timestamp.
	I0805 11:40:11.342598       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0805 11:40:11.342693       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0805 11:40:11.342824       1 event.go:294] "Event occurred" object="running-upgrade-763000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-763000 event: Registered Node running-upgrade-763000 in Controller"
	I0805 11:40:11.343567       1 shared_informer.go:262] Caches are synced for persistent volume
	I0805 11:40:11.344749       1 shared_informer.go:262] Caches are synced for PV protection
	I0805 11:40:11.393488       1 shared_informer.go:262] Caches are synced for expand
	I0805 11:40:11.761351       1 shared_informer.go:262] Caches are synced for garbage collector
	I0805 11:40:11.813138       1 shared_informer.go:262] Caches are synced for garbage collector
	I0805 11:40:11.813151       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0805 11:40:11.998729       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0805 11:40:12.045682       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-86wr8"
	I0805 11:40:12.148301       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dr97d"
	I0805 11:40:12.153575       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-4mp4m"
	
	
	==> kube-proxy [213143049c1d] <==
	I0805 11:40:12.612819       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0805 11:40:12.612843       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0805 11:40:12.612851       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0805 11:40:12.628534       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0805 11:40:12.628543       1 server_others.go:206] "Using iptables Proxier"
	I0805 11:40:12.628558       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0805 11:40:12.628772       1 server.go:661] "Version info" version="v1.24.1"
	I0805 11:40:12.628775       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 11:40:12.629038       1 config.go:317] "Starting service config controller"
	I0805 11:40:12.629044       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0805 11:40:12.629052       1 config.go:226] "Starting endpoint slice config controller"
	I0805 11:40:12.629054       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0805 11:40:12.630230       1 config.go:444] "Starting node config controller"
	I0805 11:40:12.630233       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0805 11:40:12.729834       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0805 11:40:12.729860       1 shared_informer.go:262] Caches are synced for service config
	I0805 11:40:12.730297       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [cc9a2ca90252] <==
	W0805 11:39:56.223571       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 11:39:56.223609       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 11:39:56.223646       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 11:39:56.223676       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 11:39:56.223712       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 11:39:56.223731       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 11:39:56.223778       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 11:39:56.223803       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 11:39:56.223849       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 11:39:56.223872       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 11:39:56.223917       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 11:39:56.223977       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 11:39:57.064232       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 11:39:57.064319       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 11:39:57.064795       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 11:39:57.064841       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 11:39:57.135992       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 11:39:57.136056       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 11:39:57.147030       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 11:39:57.147064       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 11:39:57.150833       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 11:39:57.150859       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 11:39:57.247973       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 11:39:57.248061       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 11:39:59.416232       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-08-05 11:35:00 UTC, ends at Mon 2024-08-05 11:44:16 UTC. --
	Aug 05 11:40:00 running-upgrade-763000 kubelet[12489]: E0805 11:40:00.464161   12489 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-763000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-763000"
	Aug 05 11:40:00 running-upgrade-763000 kubelet[12489]: E0805 11:40:00.662461   12489 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-763000\" already exists" pod="kube-system/etcd-running-upgrade-763000"
	Aug 05 11:40:00 running-upgrade-763000 kubelet[12489]: I0805 11:40:00.856907   12489 request.go:601] Waited for 1.119899053s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 05 11:40:00 running-upgrade-763000 kubelet[12489]: E0805 11:40:00.860638   12489 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-763000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-763000"
	Aug 05 11:40:11 running-upgrade-763000 kubelet[12489]: I0805 11:40:11.161431   12489 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 05 11:40:11 running-upgrade-763000 kubelet[12489]: I0805 11:40:11.161840   12489 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 05 11:40:11 running-upgrade-763000 kubelet[12489]: I0805 11:40:11.348627   12489 topology_manager.go:200] "Topology Admit Handler"
	Aug 05 11:40:11 running-upgrade-763000 kubelet[12489]: I0805 11:40:11.463204   12489 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gqvx\" (UniqueName: \"kubernetes.io/projected/e4ce8c98-f8be-4cd2-b037-e624e8f1f3e6-kube-api-access-9gqvx\") pod \"storage-provisioner\" (UID: \"e4ce8c98-f8be-4cd2-b037-e624e8f1f3e6\") " pod="kube-system/storage-provisioner"
	Aug 05 11:40:11 running-upgrade-763000 kubelet[12489]: I0805 11:40:11.463235   12489 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e4ce8c98-f8be-4cd2-b037-e624e8f1f3e6-tmp\") pod \"storage-provisioner\" (UID: \"e4ce8c98-f8be-4cd2-b037-e624e8f1f3e6\") " pod="kube-system/storage-provisioner"
	Aug 05 11:40:11 running-upgrade-763000 kubelet[12489]: E0805 11:40:11.567420   12489 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 05 11:40:11 running-upgrade-763000 kubelet[12489]: E0805 11:40:11.567441   12489 projected.go:192] Error preparing data for projected volume kube-api-access-9gqvx for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 05 11:40:11 running-upgrade-763000 kubelet[12489]: E0805 11:40:11.567475   12489 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/e4ce8c98-f8be-4cd2-b037-e624e8f1f3e6-kube-api-access-9gqvx podName:e4ce8c98-f8be-4cd2-b037-e624e8f1f3e6 nodeName:}" failed. No retries permitted until 2024-08-05 11:40:12.067461916 +0000 UTC m=+13.449358584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9gqvx" (UniqueName: "kubernetes.io/projected/e4ce8c98-f8be-4cd2-b037-e624e8f1f3e6-kube-api-access-9gqvx") pod "storage-provisioner" (UID: "e4ce8c98-f8be-4cd2-b037-e624e8f1f3e6") : configmap "kube-root-ca.crt" not found
	Aug 05 11:40:12 running-upgrade-763000 kubelet[12489]: I0805 11:40:12.047606   12489 topology_manager.go:200] "Topology Admit Handler"
	Aug 05 11:40:12 running-upgrade-763000 kubelet[12489]: I0805 11:40:12.067079   12489 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b57e5cee-0f54-4f5d-84a3-42c8d0809cb0-xtables-lock\") pod \"kube-proxy-86wr8\" (UID: \"b57e5cee-0f54-4f5d-84a3-42c8d0809cb0\") " pod="kube-system/kube-proxy-86wr8"
	Aug 05 11:40:12 running-upgrade-763000 kubelet[12489]: I0805 11:40:12.067106   12489 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkfjq\" (UniqueName: \"kubernetes.io/projected/b57e5cee-0f54-4f5d-84a3-42c8d0809cb0-kube-api-access-xkfjq\") pod \"kube-proxy-86wr8\" (UID: \"b57e5cee-0f54-4f5d-84a3-42c8d0809cb0\") " pod="kube-system/kube-proxy-86wr8"
	Aug 05 11:40:12 running-upgrade-763000 kubelet[12489]: I0805 11:40:12.067145   12489 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b57e5cee-0f54-4f5d-84a3-42c8d0809cb0-lib-modules\") pod \"kube-proxy-86wr8\" (UID: \"b57e5cee-0f54-4f5d-84a3-42c8d0809cb0\") " pod="kube-system/kube-proxy-86wr8"
	Aug 05 11:40:12 running-upgrade-763000 kubelet[12489]: I0805 11:40:12.067156   12489 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b57e5cee-0f54-4f5d-84a3-42c8d0809cb0-kube-proxy\") pod \"kube-proxy-86wr8\" (UID: \"b57e5cee-0f54-4f5d-84a3-42c8d0809cb0\") " pod="kube-system/kube-proxy-86wr8"
	Aug 05 11:40:12 running-upgrade-763000 kubelet[12489]: I0805 11:40:12.149000   12489 topology_manager.go:200] "Topology Admit Handler"
	Aug 05 11:40:12 running-upgrade-763000 kubelet[12489]: I0805 11:40:12.158953   12489 topology_manager.go:200] "Topology Admit Handler"
	Aug 05 11:40:12 running-upgrade-763000 kubelet[12489]: I0805 11:40:12.268063   12489 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l97j\" (UniqueName: \"kubernetes.io/projected/4354229a-6949-4cca-a0f0-55d414d8c6fd-kube-api-access-5l97j\") pod \"coredns-6d4b75cb6d-4mp4m\" (UID: \"4354229a-6949-4cca-a0f0-55d414d8c6fd\") " pod="kube-system/coredns-6d4b75cb6d-4mp4m"
	Aug 05 11:40:12 running-upgrade-763000 kubelet[12489]: I0805 11:40:12.268112   12489 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21d030cf-1014-4e27-a3a6-754cf6e2a804-config-volume\") pod \"coredns-6d4b75cb6d-dr97d\" (UID: \"21d030cf-1014-4e27-a3a6-754cf6e2a804\") " pod="kube-system/coredns-6d4b75cb6d-dr97d"
	Aug 05 11:40:12 running-upgrade-763000 kubelet[12489]: I0805 11:40:12.268127   12489 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4354229a-6949-4cca-a0f0-55d414d8c6fd-config-volume\") pod \"coredns-6d4b75cb6d-4mp4m\" (UID: \"4354229a-6949-4cca-a0f0-55d414d8c6fd\") " pod="kube-system/coredns-6d4b75cb6d-4mp4m"
	Aug 05 11:40:12 running-upgrade-763000 kubelet[12489]: I0805 11:40:12.268139   12489 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92q6h\" (UniqueName: \"kubernetes.io/projected/21d030cf-1014-4e27-a3a6-754cf6e2a804-kube-api-access-92q6h\") pod \"coredns-6d4b75cb6d-dr97d\" (UID: \"21d030cf-1014-4e27-a3a6-754cf6e2a804\") " pod="kube-system/coredns-6d4b75cb6d-dr97d"
	Aug 05 11:43:51 running-upgrade-763000 kubelet[12489]: I0805 11:43:51.028041   12489 scope.go:110] "RemoveContainer" containerID="8ef432b0c44959276b3bd31be62d9c74c70210138058cc58cc1cb69acc1a1fae"
	Aug 05 11:43:51 running-upgrade-763000 kubelet[12489]: I0805 11:43:51.039725   12489 scope.go:110] "RemoveContainer" containerID="5e231bd101ad7e48116d7db4b22b52d5f4ea015ba3a79f89ea5848fbbe1d359b"
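These two RemoveContainer lines explain the shifting coredns IDs in the gathering passes above: the 04:43:44 pass (local time; the journal is UTC) listed 8ef432b0c449 and 5e231bd101ad, which the kubelet then pruned, and the 04:43:52 pass saw the new attempt-2 containers 396dbef8c681 and 4b67b31cb033 instead. A sketch to list what remains, reusing the gatherer's filter (the added Status column is an assumption on top of the logged command):

	minikube ssh -p running-upgrade-763000 -- docker ps -a --filter=name=k8s_coredns --format '{{.ID}} {{.Status}}'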
	
	
	==> storage-provisioner [a30c7f188dc3] <==
	I0805 11:40:12.472052       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 11:40:12.483963       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 11:40:12.484007       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 11:40:12.496905       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 11:40:12.496981       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-763000_479641f8-de45-44e1-b987-e0170c48b94f!
	I0805 11:40:12.498181       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b0e5ec69-53fa-432c-9ae4-e2b71c051df0", APIVersion:"v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-763000_479641f8-de45-44e1-b987-e0170c48b94f became leader
	I0805 11:40:12.598657       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-763000_479641f8-de45-44e1-b987-e0170c48b94f!
	

-- /stdout --
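
The storage-provisioner log above follows the standard Kubernetes leader-election pattern: acquire a lease named k8s.io-minikube-hostpath in kube-system, then start the provisioner controller. For readers unfamiliar with the pattern, here is a minimal, hypothetical sketch using client-go's leaderelection package; the lease name and namespace are taken from the log, a LeaseLock is used for simplicity (the event in the log shows the provisioner itself locking an Endpoints object), and everything else is illustrative.

// Hypothetical sketch; assumes k8s.io/client-go is available in go.mod.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hostname, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // lease name from the log
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			// Mirrors "successfully acquired lease ... Starting provisioner
			// controller" in the log above.
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; shutting down")
			},
		},
	})
}
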
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-763000 -n running-upgrade-763000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-763000 -n running-upgrade-763000: exit status 2 (15.614130458s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-763000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-763000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-763000
--- FAIL: TestRunningBinaryUpgrade (595.21s)
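
A recurring pattern in this report is a status probe followed by "status error: exit status N (may be ok)": minikube status encodes host and component state in its exit code, so the harness has to inspect the code rather than treat any non-zero exit as a failure. A rough, hedged illustration in Go (binary path, flags, and profile name copied from the log above; the surrounding program is hypothetical, not the harness itself):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.APIServer}}", "-p", "running-upgrade-763000")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero exit (e.g. 2 or 7 in this report) encodes component
		// state such as "Stopped"; it is not necessarily a test failure.
		fmt.Printf("status %q, exit code %d (may be ok)\n", out, ee.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("status %q\n", out)
}
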

TestKubernetesUpgrade (18.74s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-767000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-767000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.905432709s)

-- stdout --
	* [kubernetes-upgrade-767000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-767000" primary control-plane node in "kubernetes-upgrade-767000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-767000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:37:36.909676    9794 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:37:36.909852    9794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:37:36.909856    9794 out.go:304] Setting ErrFile to fd 2...
	I0805 04:37:36.909858    9794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:37:36.909995    9794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:37:36.911364    9794 out.go:298] Setting JSON to false
	I0805 04:37:36.929736    9794 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5826,"bootTime":1722852030,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:37:36.929825    9794 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:37:36.934761    9794 out.go:177] * [kubernetes-upgrade-767000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:37:36.941753    9794 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:37:36.941869    9794 notify.go:220] Checking for updates...
	I0805 04:37:36.947637    9794 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:37:36.950686    9794 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:37:36.953698    9794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:37:36.956606    9794 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:37:36.959680    9794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:37:36.963011    9794 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:37:36.963076    9794 config.go:182] Loaded profile config "running-upgrade-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:37:36.963126    9794 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:37:36.966643    9794 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:37:36.973643    9794 start.go:297] selected driver: qemu2
	I0805 04:37:36.973652    9794 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:37:36.973659    9794 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:37:36.976247    9794 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:37:36.978656    9794 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:37:36.981735    9794 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 04:37:36.981770    9794 cni.go:84] Creating CNI manager for ""
	I0805 04:37:36.981777    9794 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 04:37:36.981815    9794 start.go:340] cluster config:
	{Name:kubernetes-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:37:36.985843    9794 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:37:36.992595    9794 out.go:177] * Starting "kubernetes-upgrade-767000" primary control-plane node in "kubernetes-upgrade-767000" cluster
	I0805 04:37:36.996658    9794 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 04:37:36.996688    9794 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 04:37:36.996701    9794 cache.go:56] Caching tarball of preloaded images
	I0805 04:37:36.996789    9794 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:37:36.996795    9794 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 04:37:36.996857    9794 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/kubernetes-upgrade-767000/config.json ...
	I0805 04:37:36.996868    9794 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/kubernetes-upgrade-767000/config.json: {Name:mkd993642cc6220992f5fb0741746dddaedb02db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:37:36.997146    9794 start.go:360] acquireMachinesLock for kubernetes-upgrade-767000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:37:36.997180    9794 start.go:364] duration metric: took 25.875µs to acquireMachinesLock for "kubernetes-upgrade-767000"
	I0805 04:37:36.997190    9794 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:37:36.997215    9794 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:37:37.001691    9794 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:37:37.018223    9794 start.go:159] libmachine.API.Create for "kubernetes-upgrade-767000" (driver="qemu2")
	I0805 04:37:37.018254    9794 client.go:168] LocalClient.Create starting
	I0805 04:37:37.018338    9794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:37:37.018376    9794 main.go:141] libmachine: Decoding PEM data...
	I0805 04:37:37.018384    9794 main.go:141] libmachine: Parsing certificate...
	I0805 04:37:37.018433    9794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:37:37.018460    9794 main.go:141] libmachine: Decoding PEM data...
	I0805 04:37:37.018469    9794 main.go:141] libmachine: Parsing certificate...
	I0805 04:37:37.018847    9794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:37:37.170738    9794 main.go:141] libmachine: Creating SSH key...
	I0805 04:37:37.303909    9794 main.go:141] libmachine: Creating Disk image...
	I0805 04:37:37.303921    9794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:37:37.304188    9794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2
	I0805 04:37:37.314196    9794 main.go:141] libmachine: STDOUT: 
	I0805 04:37:37.314219    9794 main.go:141] libmachine: STDERR: 
	I0805 04:37:37.314275    9794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2 +20000M
	I0805 04:37:37.322832    9794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:37:37.322850    9794 main.go:141] libmachine: STDERR: 
	I0805 04:37:37.322869    9794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2
	I0805 04:37:37.322875    9794 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:37:37.322888    9794 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:37:37.322914    9794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b5:5c:41:db:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2
	I0805 04:37:37.324711    9794 main.go:141] libmachine: STDOUT: 
	I0805 04:37:37.324727    9794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:37:37.324745    9794 client.go:171] duration metric: took 306.483208ms to LocalClient.Create
	I0805 04:37:39.326970    9794 start.go:128] duration metric: took 2.329708291s to createHost
	I0805 04:37:39.327043    9794 start.go:83] releasing machines lock for "kubernetes-upgrade-767000", held for 2.329832041s
	W0805 04:37:39.327174    9794 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:37:39.334367    9794 out.go:177] * Deleting "kubernetes-upgrade-767000" in qemu2 ...
	W0805 04:37:39.359142    9794 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:37:39.359171    9794 start.go:729] Will try again in 5 seconds ...
	I0805 04:37:44.361398    9794 start.go:360] acquireMachinesLock for kubernetes-upgrade-767000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:37:44.362018    9794 start.go:364] duration metric: took 499.584µs to acquireMachinesLock for "kubernetes-upgrade-767000"
	I0805 04:37:44.362140    9794 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:37:44.362401    9794 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:37:44.371415    9794 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:37:44.418152    9794 start.go:159] libmachine.API.Create for "kubernetes-upgrade-767000" (driver="qemu2")
	I0805 04:37:44.418198    9794 client.go:168] LocalClient.Create starting
	I0805 04:37:44.418326    9794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:37:44.418427    9794 main.go:141] libmachine: Decoding PEM data...
	I0805 04:37:44.418445    9794 main.go:141] libmachine: Parsing certificate...
	I0805 04:37:44.418504    9794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:37:44.418560    9794 main.go:141] libmachine: Decoding PEM data...
	I0805 04:37:44.418574    9794 main.go:141] libmachine: Parsing certificate...
	I0805 04:37:44.419362    9794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:37:44.592555    9794 main.go:141] libmachine: Creating SSH key...
	I0805 04:37:44.714678    9794 main.go:141] libmachine: Creating Disk image...
	I0805 04:37:44.714688    9794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:37:44.715031    9794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2
	I0805 04:37:44.725062    9794 main.go:141] libmachine: STDOUT: 
	I0805 04:37:44.725082    9794 main.go:141] libmachine: STDERR: 
	I0805 04:37:44.725164    9794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2 +20000M
	I0805 04:37:44.734394    9794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:37:44.734415    9794 main.go:141] libmachine: STDERR: 
	I0805 04:37:44.734436    9794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2
	I0805 04:37:44.734441    9794 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:37:44.734453    9794 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:37:44.734502    9794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:ff:10:c9:b5:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2
	I0805 04:37:44.736584    9794 main.go:141] libmachine: STDOUT: 
	I0805 04:37:44.736607    9794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:37:44.736620    9794 client.go:171] duration metric: took 318.412291ms to LocalClient.Create
	I0805 04:37:46.738918    9794 start.go:128] duration metric: took 2.376460625s to createHost
	I0805 04:37:46.738984    9794 start.go:83] releasing machines lock for "kubernetes-upgrade-767000", held for 2.376891167s
	W0805 04:37:46.739299    9794 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-767000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-767000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:37:46.752614    9794 out.go:177] 
	W0805 04:37:46.755672    9794 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:37:46.755723    9794 out.go:239] * 
	* 
	W0805 04:37:46.758618    9794 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:37:46.770366    9794 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-767000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
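
Both VM create attempts above die at the same step: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial of /var/run/socket_vmnet is refused, so no VM ever boots. A stdlib-only preflight check along these lines (a hypothetical helper, not part of minikube) would separate "socket_vmnet daemon is not running" from a genuine qemu failure:

package main

import (
	"fmt"
	"net"
	"time"
)

// socketVMnetReachable dials the unix socket that socket_vmnet_client uses;
// a "connection refused" here reproduces exactly the error in the log.
func socketVMnetReachable(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return fmt.Errorf("socket_vmnet not reachable at %s (is the service running?): %w", path, err)
	}
	return conn.Close()
}

func main() {
	if err := socketVMnetReachable("/var/run/socket_vmnet"); err != nil {
		fmt.Println(err)
	}
}
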
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-767000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-767000: (3.440291125s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-767000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-767000 status --format={{.Host}}: exit status 7 (36.184834ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-767000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-767000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.182511s)

-- stdout --
	* [kubernetes-upgrade-767000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-767000" primary control-plane node in "kubernetes-upgrade-767000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-767000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-767000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:37:50.290917    9834 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:37:50.291068    9834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:37:50.291072    9834 out.go:304] Setting ErrFile to fd 2...
	I0805 04:37:50.291074    9834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:37:50.291220    9834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:37:50.292217    9834 out.go:298] Setting JSON to false
	I0805 04:37:50.309226    9834 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5840,"bootTime":1722852030,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:37:50.309405    9834 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:37:50.314043    9834 out.go:177] * [kubernetes-upgrade-767000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:37:50.319903    9834 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:37:50.320032    9834 notify.go:220] Checking for updates...
	I0805 04:37:50.325886    9834 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:37:50.328815    9834 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:37:50.331945    9834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:37:50.333513    9834 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:37:50.336904    9834 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:37:50.340117    9834 config.go:182] Loaded profile config "kubernetes-upgrade-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0805 04:37:50.340372    9834 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:37:50.344684    9834 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:37:50.351906    9834 start.go:297] selected driver: qemu2
	I0805 04:37:50.351912    9834 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:37:50.351961    9834 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:37:50.354120    9834 cni.go:84] Creating CNI manager for ""
	I0805 04:37:50.354137    9834 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:37:50.354157    9834 start.go:340] cluster config:
	{Name:kubernetes-upgrade-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-767000 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:37:50.357405    9834 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:37:50.364853    9834 out.go:177] * Starting "kubernetes-upgrade-767000" primary control-plane node in "kubernetes-upgrade-767000" cluster
	I0805 04:37:50.368936    9834 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 04:37:50.368953    9834 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 04:37:50.368965    9834 cache.go:56] Caching tarball of preloaded images
	I0805 04:37:50.369036    9834 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:37:50.369041    9834 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0805 04:37:50.369101    9834 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/kubernetes-upgrade-767000/config.json ...
	I0805 04:37:50.369597    9834 start.go:360] acquireMachinesLock for kubernetes-upgrade-767000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:37:50.369623    9834 start.go:364] duration metric: took 20.125µs to acquireMachinesLock for "kubernetes-upgrade-767000"
	I0805 04:37:50.369630    9834 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:37:50.369636    9834 fix.go:54] fixHost starting: 
	I0805 04:37:50.369741    9834 fix.go:112] recreateIfNeeded on kubernetes-upgrade-767000: state=Stopped err=<nil>
	W0805 04:37:50.369749    9834 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:37:50.373800    9834 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-767000" ...
	I0805 04:37:50.381768    9834 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:37:50.381801    9834 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:ff:10:c9:b5:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2
	I0805 04:37:50.383755    9834 main.go:141] libmachine: STDOUT: 
	I0805 04:37:50.383774    9834 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:37:50.383802    9834 fix.go:56] duration metric: took 14.167333ms for fixHost
	I0805 04:37:50.383806    9834 start.go:83] releasing machines lock for "kubernetes-upgrade-767000", held for 14.1795ms
	W0805 04:37:50.383813    9834 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:37:50.383840    9834 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:37:50.383844    9834 start.go:729] Will try again in 5 seconds ...
	I0805 04:37:55.386126    9834 start.go:360] acquireMachinesLock for kubernetes-upgrade-767000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:37:55.386638    9834 start.go:364] duration metric: took 361.334µs to acquireMachinesLock for "kubernetes-upgrade-767000"
	I0805 04:37:55.386813    9834 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:37:55.386835    9834 fix.go:54] fixHost starting: 
	I0805 04:37:55.387578    9834 fix.go:112] recreateIfNeeded on kubernetes-upgrade-767000: state=Stopped err=<nil>
	W0805 04:37:55.387606    9834 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:37:55.393122    9834 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-767000" ...
	I0805 04:37:55.401094    9834 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:37:55.401345    9834 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:ff:10:c9:b5:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubernetes-upgrade-767000/disk.qcow2
	I0805 04:37:55.411298    9834 main.go:141] libmachine: STDOUT: 
	I0805 04:37:55.411353    9834 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:37:55.411436    9834 fix.go:56] duration metric: took 24.604541ms for fixHost
	I0805 04:37:55.411451    9834 start.go:83] releasing machines lock for "kubernetes-upgrade-767000", held for 24.792375ms
	W0805 04:37:55.411654    9834 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-767000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-767000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:37:55.419178    9834 out.go:177] 
	W0805 04:37:55.423153    9834 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:37:55.423183    9834 out.go:239] * 
	* 
	W0805 04:37:55.425221    9834 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:37:55.433153    9834 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-767000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-767000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-767000 version --output=json: exit status 1 (64.49ms)

** stderr ** 
	error: context "kubernetes-upgrade-767000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-05 04:37:55.512953 -0700 PDT m=+948.086428293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-767000 -n kubernetes-upgrade-767000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-767000 -n kubernetes-upgrade-767000: exit status 7 (32.664333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-767000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-767000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-767000
--- FAIL: TestKubernetesUpgrade (18.74s)
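
The transcript above also shows minikube's built-in retry shape: after "! StartHost failed, but will try again", it waits five seconds and repeats the start exactly once before exiting with GUEST_PROVISION. A stripped-down sketch of that fixed-delay, single-retry behavior (retryOnce is an invented name, not minikube's actual helper):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryOnce runs op, and on failure waits the given delay and tries one
// more time, matching the "Will try again in 5 seconds" pattern in the log.
func retryOnce(delay time.Duration, op func() error) error {
	if err := op(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(delay)
		return op()
	}
	return nil
}

func main() {
	err := retryOnce(5*time.Second, func() error {
		// Stand-in for the driver start that keeps failing in this report.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	})
	fmt.Println("final:", err)
}
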

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.45s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19377
- KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3902588519/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.45s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.22s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19377
- KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current958480038/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.22s)
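
Both TestHyperkitDriverSkipUpgrade subtests fail with DRV_UNSUPPORTED_OS because the hyperkit driver exists only for darwin/amd64, while this agent is darwin/arm64. A guard of roughly this shape (hypothetical; the real suite may gate the driver differently) would skip rather than fail on unsupported hosts:

// driver_guard_test.go -- hypothetical sketch
package integration

import (
	"runtime"
	"testing"
)

// requireHyperkitHost skips the calling test unless the host can actually
// run the hyperkit driver (darwin/amd64 only).
func requireHyperkitHost(t *testing.T) {
	t.Helper()
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		t.Skipf("hyperkit driver unsupported on %s/%s", runtime.GOOS, runtime.GOARCH)
	}
}
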

TestStoppedBinaryUpgrade/Upgrade (585.85s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2082150871 start -p stopped-upgrade-528000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2082150871 start -p stopped-upgrade-528000 --memory=2200 --vm-driver=qemu2 : (51.122495916s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2082150871 -p stopped-upgrade-528000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2082150871 -p stopped-upgrade-528000 stop: (12.114929625s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-528000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-528000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.506962417s)

-- stdout --
	* [stopped-upgrade-528000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-528000" primary control-plane node in "stopped-upgrade-528000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-528000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0805 04:38:59.864504    9870 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:38:59.864666    9870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:38:59.864671    9870 out.go:304] Setting ErrFile to fd 2...
	I0805 04:38:59.864674    9870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:38:59.864856    9870 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:38:59.866172    9870 out.go:298] Setting JSON to false
	I0805 04:38:59.887102    9870 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5909,"bootTime":1722852030,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:38:59.887168    9870 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:38:59.891201    9870 out.go:177] * [stopped-upgrade-528000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:38:59.899123    9870 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:38:59.899170    9870 notify.go:220] Checking for updates...
	I0805 04:38:59.904616    9870 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:38:59.908076    9870 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:38:59.911113    9870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:38:59.914120    9870 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:38:59.917072    9870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:38:59.920356    9870 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:38:59.924047    9870 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 04:38:59.927107    9870 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:38:59.931082    9870 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:38:59.938034    9870 start.go:297] selected driver: qemu2
	I0805 04:38:59.938039    9870 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51465 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 04:38:59.938090    9870 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:38:59.940755    9870 cni.go:84] Creating CNI manager for ""
	I0805 04:38:59.940771    9870 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:38:59.940794    9870 start.go:340] cluster config:
	{Name:stopped-upgrade-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51465 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
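
The cluster config dumped above is the same structure minikube persists to the profile's config.json (the "Saving config to ..." lines below). As a minimal sketch of reading a subset of it back in Go, assuming only the field names visible in this log (the struct below is hypothetical, not minikube's actual type):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Hypothetical subset of the profile config shown in the log above;
    // field names mirror keys visible in the dump, not minikube's real types.
    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
    }

    type ClusterConfig struct {
        Name             string
        Driver           string
        Memory           int
        CPUs             int
        KubernetesConfig KubernetesConfig
    }

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: readcfg <path-to-config.json>")
            os.Exit(1)
        }
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var cfg ClusterConfig
        if err := json.Unmarshal(data, &cfg); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("%s: driver=%s k8s=%s runtime=%s\n",
            cfg.Name, cfg.Driver,
            cfg.KubernetesConfig.KubernetesVersion,
            cfg.KubernetesConfig.ContainerRuntime)
    }
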
	I0805 04:38:59.940846    9870 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:38:59.947976    9870 out.go:177] * Starting "stopped-upgrade-528000" primary control-plane node in "stopped-upgrade-528000" cluster
	I0805 04:38:59.952131    9870 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 04:38:59.952154    9870 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0805 04:38:59.952167    9870 cache.go:56] Caching tarball of preloaded images
	I0805 04:38:59.952242    9870 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:38:59.952250    9870 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0805 04:38:59.952311    9870 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/config.json ...
	I0805 04:38:59.952794    9870 start.go:360] acquireMachinesLock for stopped-upgrade-528000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:38:59.952829    9870 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "stopped-upgrade-528000"
	I0805 04:38:59.952837    9870 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:38:59.952842    9870 fix.go:54] fixHost starting: 
	I0805 04:38:59.952952    9870 fix.go:112] recreateIfNeeded on stopped-upgrade-528000: state=Stopped err=<nil>
	W0805 04:38:59.952960    9870 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:38:59.960064    9870 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-528000" ...
	I0805 04:38:59.964093    9870 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:38:59.964177    9870 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51431-:22,hostfwd=tcp::51432-:2376,hostname=stopped-upgrade-528000 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/disk.qcow2
	I0805 04:39:00.012319    9870 main.go:141] libmachine: STDOUT: 
	I0805 04:39:00.012344    9870 main.go:141] libmachine: STDERR: 
	I0805 04:39:00.012350    9870 main.go:141] libmachine: Waiting for VM to start (ssh -p 51431 docker@127.0.0.1)...
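
The restart above boils down to re-invoking qemu-system-aarch64 with the profile's saved arguments and then polling SSH on the forwarded port. A minimal sketch of launching a comparable command from Go, assuming the argument list shown in the log (paths shortened here for readability; this is not minikube's actual launch code):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Argument list condensed from the qemu-system-aarch64 invocation logged above.
        args := []string{
            "-M", "virt,highmem=off",
            "-cpu", "host",
            "-accel", "hvf", // hardware acceleration on Apple Silicon, per the log
            "-m", "2200", "-smp", "2",
            "-boot", "d",
            "-cdrom", "boot2docker.iso",
            "-nic", "user,model=virtio,hostfwd=tcp::51431-:22",
            "-daemonize",
            "disk.qcow2",
        }
        cmd := exec.Command("qemu-system-aarch64", args...)
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("qemu failed: %v\n%s", err, out)
        }
        // With -daemonize, QEMU forks and returns once the VM is up;
        // the caller then waits for SSH on the forwarded port (51431 here).
    }
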
	I0805 04:39:20.395964    9870 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/config.json ...
	I0805 04:39:20.396388    9870 machine.go:94] provisionDockerMachine start ...
	I0805 04:39:20.396461    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:20.396722    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:20.396729    9870 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 04:39:20.476021    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 04:39:20.476051    9870 buildroot.go:166] provisioning hostname "stopped-upgrade-528000"
	I0805 04:39:20.476117    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:20.476295    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:20.476306    9870 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-528000 && echo "stopped-upgrade-528000" | sudo tee /etc/hostname
	I0805 04:39:20.556223    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-528000
	
	I0805 04:39:20.556281    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:20.556411    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:20.556421    9870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-528000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-528000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-528000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 04:39:20.629414    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 04:39:20.629428    9870 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19377-7130/.minikube CaCertPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19377-7130/.minikube}
	I0805 04:39:20.629435    9870 buildroot.go:174] setting up certificates
	I0805 04:39:20.629439    9870 provision.go:84] configureAuth start
	I0805 04:39:20.629448    9870 provision.go:143] copyHostCerts
	I0805 04:39:20.629551    9870 exec_runner.go:144] found /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.pem, removing ...
	I0805 04:39:20.629558    9870 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.pem
	I0805 04:39:20.629674    9870 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.pem (1078 bytes)
	I0805 04:39:20.629886    9870 exec_runner.go:144] found /Users/jenkins/minikube-integration/19377-7130/.minikube/cert.pem, removing ...
	I0805 04:39:20.629889    9870 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19377-7130/.minikube/cert.pem
	I0805 04:39:20.629949    9870 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19377-7130/.minikube/cert.pem (1123 bytes)
	I0805 04:39:20.630083    9870 exec_runner.go:144] found /Users/jenkins/minikube-integration/19377-7130/.minikube/key.pem, removing ...
	I0805 04:39:20.630086    9870 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19377-7130/.minikube/key.pem
	I0805 04:39:20.630137    9870 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19377-7130/.minikube/key.pem (1675 bytes)
	I0805 04:39:20.630245    9870 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-528000 san=[127.0.0.1 localhost minikube stopped-upgrade-528000]
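
The "generating server cert" step above issues a TLS server certificate whose SANs cover every name the Docker endpoint may be reached by. A minimal self-signed sketch with Go's crypto/x509, using the SANs and org from the log line (the real flow signs with the ca.pem/ca-key.pem pair named above rather than self-signing):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-528000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            // SANs taken from the san=[...] list in the log line above.
            DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-528000"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed for brevity: template doubles as the parent certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
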
	I0805 04:39:20.897935    9870 provision.go:177] copyRemoteCerts
	I0805 04:39:20.897988    9870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 04:39:20.897997    9870 sshutil.go:53] new ssh client: &{IP:localhost Port:51431 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/id_rsa Username:docker}
	I0805 04:39:20.936660    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0805 04:39:20.944029    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0805 04:39:20.951164    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 04:39:20.957642    9870 provision.go:87] duration metric: took 328.195083ms to configureAuth
	I0805 04:39:20.957652    9870 buildroot.go:189] setting minikube options for container-runtime
	I0805 04:39:20.957770    9870 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:39:20.957807    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:20.957909    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:20.957916    9870 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 04:39:21.027437    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 04:39:21.027446    9870 buildroot.go:70] root file system type: tmpfs
	I0805 04:39:21.027491    9870 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 04:39:21.027525    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:21.027624    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:21.027659    9870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 04:39:21.099803    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 04:39:21.099860    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:21.099980    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:21.099988    9870 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 04:39:21.462230    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
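
The diff || { mv; daemon-reload; enable; restart; } one-liner above is an idempotent-update pattern: the new unit is staged as docker.service.new, and the live file is replaced and the service restarted only when the content actually differs (here the live file did not exist yet, so the replacement always runs). A minimal local sketch of the same pattern, assuming plain files rather than the remote SSH runner:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // replaceIfChanged mirrors the diff || { mv; reload; enable; restart; } shell
    // above: only swap the unit file and bounce the service when content differs.
    func replaceIfChanged(staged, live, service string) error {
        newData, err := os.ReadFile(staged)
        if err != nil {
            return err
        }
        oldData, err := os.ReadFile(live) // the log shows the live file can be missing
        if err == nil && bytes.Equal(oldData, newData) {
            return os.Remove(staged) // unchanged: leave the running service alone
        }
        if err := os.Rename(staged, live); err != nil {
            return err
        }
        for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := replaceIfChanged(
            "/lib/systemd/system/docker.service.new",
            "/lib/systemd/system/docker.service",
            "docker"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
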
	I0805 04:39:21.462242    9870 machine.go:97] duration metric: took 1.065835583s to provisionDockerMachine
	I0805 04:39:21.462248    9870 start.go:293] postStartSetup for "stopped-upgrade-528000" (driver="qemu2")
	I0805 04:39:21.462255    9870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 04:39:21.462310    9870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 04:39:21.462319    9870 sshutil.go:53] new ssh client: &{IP:localhost Port:51431 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/id_rsa Username:docker}
	I0805 04:39:21.501138    9870 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 04:39:21.502491    9870 info.go:137] Remote host: Buildroot 2021.02.12
	I0805 04:39:21.502498    9870 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19377-7130/.minikube/addons for local assets ...
	I0805 04:39:21.502608    9870 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19377-7130/.minikube/files for local assets ...
	I0805 04:39:21.502733    9870 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/ssl/certs/76242.pem -> 76242.pem in /etc/ssl/certs
	I0805 04:39:21.502867    9870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 04:39:21.505229    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/ssl/certs/76242.pem --> /etc/ssl/certs/76242.pem (1708 bytes)
	I0805 04:39:21.512327    9870 start.go:296] duration metric: took 50.073ms for postStartSetup
	I0805 04:39:21.512343    9870 fix.go:56] duration metric: took 21.559292792s for fixHost
	I0805 04:39:21.512378    9870 main.go:141] libmachine: Using SSH client type: native
	I0805 04:39:21.512485    9870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10113aa10] 0x10113d270 <nil>  [] 0s} localhost 51431 <nil> <nil>}
	I0805 04:39:21.512490    9870 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 04:39:21.581792    9870 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722857961.776913588
	
	I0805 04:39:21.581800    9870 fix.go:216] guest clock: 1722857961.776913588
	I0805 04:39:21.581804    9870 fix.go:229] Guest: 2024-08-05 04:39:21.776913588 -0700 PDT Remote: 2024-08-05 04:39:21.512344 -0700 PDT m=+21.679343001 (delta=264.569588ms)
	I0805 04:39:21.581814    9870 fix.go:200] guest clock delta is within tolerance: 264.569588ms
	I0805 04:39:21.581817    9870 start.go:83] releasing machines lock for "stopped-upgrade-528000", held for 21.628774042s
	I0805 04:39:21.581872    9870 ssh_runner.go:195] Run: cat /version.json
	I0805 04:39:21.581880    9870 sshutil.go:53] new ssh client: &{IP:localhost Port:51431 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/id_rsa Username:docker}
	I0805 04:39:21.582579    9870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 04:39:21.582596    9870 sshutil.go:53] new ssh client: &{IP:localhost Port:51431 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/id_rsa Username:docker}
	W0805 04:39:21.618094    9870 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0805 04:39:21.618144    9870 ssh_runner.go:195] Run: systemctl --version
	I0805 04:39:21.658992    9870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 04:39:21.660596    9870 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 04:39:21.660623    9870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0805 04:39:21.663939    9870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0805 04:39:21.668580    9870 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 04:39:21.668588    9870 start.go:495] detecting cgroup driver to use...
	I0805 04:39:21.668667    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 04:39:21.675752    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0805 04:39:21.679420    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 04:39:21.682305    9870 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 04:39:21.682328    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 04:39:21.685114    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 04:39:21.688232    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 04:39:21.691779    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 04:39:21.695071    9870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 04:39:21.698174    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 04:39:21.700982    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 04:39:21.704223    9870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 04:39:21.707715    9870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 04:39:21.710545    9870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 04:39:21.713048    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:21.791373    9870 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 04:39:21.798737    9870 start.go:495] detecting cgroup driver to use...
	I0805 04:39:21.798804    9870 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 04:39:21.806200    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 04:39:21.810872    9870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 04:39:21.822723    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 04:39:21.827176    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 04:39:21.831557    9870 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 04:39:21.888182    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 04:39:21.893778    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 04:39:21.899763    9870 ssh_runner.go:195] Run: which cri-dockerd
	I0805 04:39:21.901026    9870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 04:39:21.904092    9870 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 04:39:21.909208    9870 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 04:39:21.994031    9870 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 04:39:22.081352    9870 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 04:39:22.081421    9870 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 04:39:22.086637    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:22.173899    9870 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 04:39:23.338689    9870 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.164760208s)
	I0805 04:39:23.338763    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 04:39:23.343464    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 04:39:23.348692    9870 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 04:39:23.428703    9870 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 04:39:23.511973    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:23.581347    9870 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 04:39:23.587694    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 04:39:23.592214    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:23.676331    9870 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 04:39:23.715138    9870 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 04:39:23.715219    9870 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
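
"Will wait 60s for socket path" above is a simple poll loop: stat the socket until it appears or the deadline passes. A minimal sketch of that wait, assuming a plain filesystem stat in place of minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it exists or timeout elapses,
    // matching the "Will wait 60s for socket path" step in the log.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
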
	I0805 04:39:23.718638    9870 start.go:563] Will wait 60s for crictl version
	I0805 04:39:23.718696    9870 ssh_runner.go:195] Run: which crictl
	I0805 04:39:23.720095    9870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 04:39:23.734704    9870 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0805 04:39:23.734764    9870 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 04:39:23.750548    9870 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 04:39:23.770134    9870 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0805 04:39:23.770200    9870 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0805 04:39:23.771545    9870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 04:39:23.775606    9870 kubeadm.go:883] updating cluster {Name:stopped-upgrade-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51465 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0805 04:39:23.775649    9870 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 04:39:23.775688    9870 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 04:39:23.786106    9870 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 04:39:23.786114    9870 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 04:39:23.786162    9870 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 04:39:23.789057    9870 ssh_runner.go:195] Run: which lz4
	I0805 04:39:23.790401    9870 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 04:39:23.791600    9870 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 04:39:23.791617    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0805 04:39:24.720212    9870 docker.go:649] duration metric: took 929.83025ms to copy over tarball
	I0805 04:39:24.720268    9870 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 04:39:25.881342    9870 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.161049041s)
	I0805 04:39:25.881365    9870 ssh_runner.go:146] rm: /preloaded.tar.lz4
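
The preload handling above is a stat-then-copy cache pattern: check whether /preloaded.tar.lz4 already exists on the guest, transfer it only on a miss, extract it with tar, then remove the tarball. A minimal local sketch of the check-before-copy step, with plain files standing in for the scp transfer:

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // ensureFile copies src to dst only when dst is missing, the same
    // existence check the log performs with `stat -c "%s %y"` before scp.
    func ensureFile(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // cache hit: skip the transfer
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        if err := ensureFile("preloaded-images.tar.lz4", "/preloaded.tar.lz4"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
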
	I0805 04:39:25.897169    9870 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 04:39:25.900936    9870 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0805 04:39:25.906377    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:25.984192    9870 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 04:39:27.731354    9870 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.747122333s)
	I0805 04:39:27.731474    9870 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 04:39:27.743162    9870 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 04:39:27.743168    9870 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 04:39:27.743173    9870 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 04:39:27.747770    9870 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:39:27.749566    9870 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:39:27.751509    9870 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:39:27.751598    9870 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:39:27.754015    9870 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:39:27.754034    9870 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:39:27.756377    9870 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:39:27.756403    9870 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:39:27.756475    9870 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:39:27.758063    9870 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:39:27.758112    9870 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 04:39:27.759212    9870 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 04:39:27.759317    9870 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:39:27.759339    9870 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:39:27.760242    9870 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 04:39:27.761367    9870 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0805 04:39:28.166286    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:39:28.173899    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:39:28.178055    9870 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0805 04:39:28.178079    9870 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:39:28.178127    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0805 04:39:28.180058    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:39:28.188543    9870 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0805 04:39:28.188585    9870 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:39:28.188637    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0805 04:39:28.194289    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0805 04:39:28.198322    9870 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0805 04:39:28.198343    9870 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:39:28.198398    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 04:39:28.208953    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0805 04:39:28.214286    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:39:28.216040    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0805 04:39:28.220300    9870 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 04:39:28.220424    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:39:28.226921    9870 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0805 04:39:28.226946    9870 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:39:28.227003    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0805 04:39:28.235237    9870 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0805 04:39:28.235258    9870 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:39:28.235319    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0805 04:39:28.242387    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0805 04:39:28.249095    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 04:39:28.249216    9870 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0805 04:39:28.251122    9870 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0805 04:39:28.251135    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0805 04:39:28.287808    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0805 04:39:28.287964    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0805 04:39:28.290222    9870 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0805 04:39:28.290231    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0805 04:39:28.298015    9870 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0805 04:39:28.298038    9870 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0805 04:39:28.298095    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0805 04:39:28.303574    9870 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0805 04:39:28.303595    9870 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0805 04:39:28.303650    9870 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0805 04:39:28.345261    9870 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0805 04:39:28.345303    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 04:39:28.345304    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 04:39:28.345409    9870 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0805 04:39:28.345430    9870 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0805 04:39:28.346994    9870 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0805 04:39:28.347007    9870 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0805 04:39:28.347014    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0805 04:39:28.347020    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0805 04:39:28.360536    9870 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0805 04:39:28.360551    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0805 04:39:28.361473    9870 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 04:39:28.361571    9870 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:39:28.454735    9870 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0805 04:39:28.454737    9870 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0805 04:39:28.454768    9870 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:39:28.454824    9870 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:39:28.483952    9870 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 04:39:28.484073    9870 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0805 04:39:28.495896    9870 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0805 04:39:28.495931    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0805 04:39:28.567356    9870 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 04:39:28.567373    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0805 04:39:28.906093    9870 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 04:39:28.906117    9870 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0805 04:39:28.906124    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0805 04:39:29.060397    9870 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0805 04:39:29.060436    9870 cache_images.go:92] duration metric: took 1.317244959s to LoadCachedImages
	W0805 04:39:29.060483    9870 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
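
Each "Loading image" step above runs `sudo cat <tarball> | docker load` on the guest. A minimal sketch of the same pipe from Go, feeding a tarball into docker load's stdin (a local docker daemon assumed here, not the remote runner):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Tarball path pattern taken from the /var/lib/minikube/images entries above.
        f, err := os.Open("/var/lib/minikube/images/coredns_v1.8.6")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f // equivalent of `cat tarball | docker load`
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("docker load: %v\n%s", err, out)
        }
        log.Printf("loaded: %s", out)
    }
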
	I0805 04:39:29.060488    9870 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0805 04:39:29.060536    9870 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-528000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 04:39:29.060598    9870 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 04:39:29.074315    9870 cni.go:84] Creating CNI manager for ""
	I0805 04:39:29.074328    9870 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:39:29.074335    9870 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 04:39:29.074344    9870 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-528000 NodeName:stopped-upgrade-528000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 04:39:29.074418    9870 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-528000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 04:39:29.074468    9870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0805 04:39:29.077252    9870 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 04:39:29.077281    9870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 04:39:29.080261    9870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0805 04:39:29.085220    9870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 04:39:29.090260    9870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0805 04:39:29.095472    9870 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0805 04:39:29.096660    9870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 04:39:29.100104    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:39:29.178024    9870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 04:39:29.187771    9870 certs.go:68] Setting up /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000 for IP: 10.0.2.15
	I0805 04:39:29.187780    9870 certs.go:194] generating shared ca certs ...
	I0805 04:39:29.187788    9870 certs.go:226] acquiring lock for ca certs: {Name:mk0fb10f8f63b8d852122cff16e2a9135337707a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:39:29.187964    9870 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.key
	I0805 04:39:29.188021    9870 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/proxy-client-ca.key
	I0805 04:39:29.188029    9870 certs.go:256] generating profile certs ...
	I0805 04:39:29.188105    9870 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/client.key
	I0805 04:39:29.188125    9870 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.key.80e3a405
	I0805 04:39:29.188137    9870 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.crt.80e3a405 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0805 04:39:29.271695    9870 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.crt.80e3a405 ...
	I0805 04:39:29.271706    9870 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.crt.80e3a405: {Name:mk376af323afd036739999d344555f5c14c23460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:39:29.272043    9870 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.key.80e3a405 ...
	I0805 04:39:29.272047    9870 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.key.80e3a405: {Name:mk975eee9cf97d8164af586ccad65f113a3237f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:39:29.272185    9870 certs.go:381] copying /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.crt.80e3a405 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.crt
	I0805 04:39:29.272322    9870 certs.go:385] copying /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.key.80e3a405 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.key
	I0805 04:39:29.272468    9870 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/proxy-client.key
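
	The apiserver serving cert generated above carries IP SANs for every address clients may dial: the in-cluster service VIP (10.96.0.1), loopback, and the node IP (10.0.2.15). A self-signed sketch with Go's crypto/x509, assuming self-signing only for brevity (minikube signs with its CA):

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        key, _ := rsa.GenerateKey(rand.Reader, 2048)
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{CommonName: "minikube"},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(24 * time.Hour),
	            // every IP the apiserver answers on, matching the log line above
	            IPAddresses: []net.IP{
	                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
	                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
	            },
	            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }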
	I0805 04:39:29.272593    9870 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/7624.pem (1338 bytes)
	W0805 04:39:29.272619    9870 certs.go:480] ignoring /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/7624_empty.pem, impossibly tiny 0 bytes
	I0805 04:39:29.272624    9870 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 04:39:29.272649    9870 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem (1078 bytes)
	I0805 04:39:29.272667    9870 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem (1123 bytes)
	I0805 04:39:29.272691    9870 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/key.pem (1675 bytes)
	I0805 04:39:29.272731    9870 certs.go:484] found cert: /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/ssl/certs/76242.pem (1708 bytes)
	I0805 04:39:29.273092    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 04:39:29.280242    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 04:39:29.287279    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 04:39:29.293607    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 04:39:29.300575    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 04:39:29.308146    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 04:39:29.315550    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 04:39:29.323067    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 04:39:29.329918    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/7624.pem --> /usr/share/ca-certificates/7624.pem (1338 bytes)
	I0805 04:39:29.336664    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/ssl/certs/76242.pem --> /usr/share/ca-certificates/76242.pem (1708 bytes)
	I0805 04:39:29.343803    9870 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 04:39:29.350926    9870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 04:39:29.356139    9870 ssh_runner.go:195] Run: openssl version
	I0805 04:39:29.358048    9870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76242.pem && ln -fs /usr/share/ca-certificates/76242.pem /etc/ssl/certs/76242.pem"
	I0805 04:39:29.361760    9870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76242.pem
	I0805 04:39:29.363141    9870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 11:23 /usr/share/ca-certificates/76242.pem
	I0805 04:39:29.363158    9870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76242.pem
	I0805 04:39:29.364897    9870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76242.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 04:39:29.368276    9870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 04:39:29.371603    9870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 04:39:29.373106    9870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 11:35 /usr/share/ca-certificates/minikubeCA.pem
	I0805 04:39:29.373126    9870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 04:39:29.374948    9870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 04:39:29.377750    9870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7624.pem && ln -fs /usr/share/ca-certificates/7624.pem /etc/ssl/certs/7624.pem"
	I0805 04:39:29.380957    9870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7624.pem
	I0805 04:39:29.382637    9870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 11:23 /usr/share/ca-certificates/7624.pem
	I0805 04:39:29.382661    9870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7624.pem
	I0805 04:39:29.384449    9870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7624.pem /etc/ssl/certs/51391683.0"
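
	The `openssl x509 -hash -noout` / `ln -fs` pairs above install each PEM under the subject-hash name (`<hash>.0`) that OpenSSL's trust-store lookup in /etc/ssl/certs expects. A sketch of the same step in Go, shelling out to openssl (the paths are illustrative):

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )

	    func linkByHash(pemPath, certsDir string) error {
	        // equivalent of: openssl x509 -hash -noout -in <pem>
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	        if err != nil {
	            return err
	        }
	        hash := strings.TrimSpace(string(out))
	        link := filepath.Join(certsDir, hash+".0")
	        os.Remove(link) // mirror `ln -fs`: replace any stale link
	        return os.Symlink(pemPath, link)
	    }

	    func main() {
	        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	        }
	    }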
	I0805 04:39:29.388006    9870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 04:39:29.389566    9870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 04:39:29.391628    9870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 04:39:29.393554    9870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 04:39:29.395502    9870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 04:39:29.397293    9870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 04:39:29.399076    9870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
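
	`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24 h); that exit status is what decides whether a cert needs regenerating before the cluster restart. A Go equivalent, as a sketch:

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    // expiresWithin reports whether the cert at pemPath is no longer
	    // valid `window` from now, i.e. what -checkend tests.
	    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	        data, err := os.ReadFile(pemPath)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("no PEM block in %s", pemPath)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(window).After(cert.NotAfter), nil
	    }

	    func main() {
	        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            return
	        }
	        fmt.Println("expires within 24h:", soon)
	    }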
	I0805 04:39:29.400805    9870 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51465 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 04:39:29.400864    9870 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 04:39:29.411453    9870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 04:39:29.414360    9870 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 04:39:29.414367    9870 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 04:39:29.414389    9870 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 04:39:29.417104    9870 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 04:39:29.417413    9870 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-528000" does not appear in /Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:39:29.417513    9870 kubeconfig.go:62] /Users/jenkins/minikube-integration/19377-7130/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-528000" cluster setting kubeconfig missing "stopped-upgrade-528000" context setting]
	I0805 04:39:29.417731    9870 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/kubeconfig: {Name:mk9388f295704cbd2679ba0e5c0bb91678f79ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:39:29.418189    9870 kapi.go:59] client config for stopped-upgrade-528000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/client.key", CAFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1024d01b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 04:39:29.418514    9870 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 04:39:29.421101    9870 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-528000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
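
	The drift check above relies on `diff -u` exit codes: 0 means the files match, 1 means they differ, and 2 or more means trouble (for example a missing file). Status 1 is treated as "reconfigure from the new kubeadm.yaml"; here the drift is the criSocket URI scheme and the cgroup driver. A sketch of that check in Go:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func configDrifted(oldPath, newPath string) (bool, string, error) {
	        out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
	        if err == nil {
	            return false, "", nil // exit status 0: identical files
	        }
	        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
	            return true, string(out), nil // exit status 1: files differ
	        }
	        return false, "", err // exit status >= 2: diff itself failed
	    }

	    func main() {
	        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	        fmt.Println("drifted:", drifted, "err:", err)
	        if drifted {
	            fmt.Print(diff)
	        }
	    }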
	I0805 04:39:29.421107    9870 kubeadm.go:1160] stopping kube-system containers ...
	I0805 04:39:29.421143    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 04:39:29.434709    9870 docker.go:483] Stopping containers: [0f824af6ef04 2ce668670762 d9ac8003079b c61b252b6587 eeef0a622ba7 c3de4560f438 9d1e43dbed7e fdcbbe9ff0d6 e320788f24f2]
	I0805 04:39:29.434776    9870 ssh_runner.go:195] Run: docker stop 0f824af6ef04 2ce668670762 d9ac8003079b c61b252b6587 eeef0a622ba7 c3de4560f438 9d1e43dbed7e fdcbbe9ff0d6 e320788f24f2
	I0805 04:39:29.445816    9870 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 04:39:29.451341    9870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 04:39:29.454066    9870 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 04:39:29.454071    9870 kubeadm.go:157] found existing configuration files:
	
	I0805 04:39:29.454093    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/admin.conf
	I0805 04:39:29.456699    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 04:39:29.456721    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 04:39:29.459713    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/kubelet.conf
	I0805 04:39:29.462225    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 04:39:29.462246    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 04:39:29.464730    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/controller-manager.conf
	I0805 04:39:29.467716    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 04:39:29.467741    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 04:39:29.470282    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/scheduler.conf
	I0805 04:39:29.472669    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 04:39:29.472690    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
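
	Each kubeconfig under /etc/kubernetes is grepped above for the expected control-plane endpoint and removed when the check fails (here with status 2, since the files do not exist at all), so the following kubeadm phases can regenerate them cleanly. A sketch of that loop (a hypothetical helper, not minikube's code):

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    func main() {
	        endpoint := "https://control-plane.minikube.internal:51465"
	        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
	            path := "/etc/kubernetes/" + f
	            data, err := os.ReadFile(path)
	            if err == nil && strings.Contains(string(data), endpoint) {
	                continue // config already targets the right endpoint; keep it
	            }
	            os.Remove(path) // stale or missing: force regeneration
	            fmt.Println("removed (or absent):", path)
	        }
	    }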
	I0805 04:39:29.475539    9870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 04:39:29.478198    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:39:29.500488    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:39:29.821566    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:39:29.949457    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 04:39:29.974777    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
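
	Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file. A sketch that only prints the equivalent commands instead of running them:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        phases := [][]string{
	            {"certs", "all"},
	            {"kubeconfig", "all"},
	            {"kubelet-start"},
	            {"control-plane", "all"},
	            {"etcd", "local"},
	        }
	        for _, p := range phases {
	            args := append([]string{"init", "phase"}, p...)
	            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	            cmd := exec.Command("kubeadm", args...)
	            fmt.Println("would run:", cmd.String())
	        }
	    }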
	I0805 04:39:30.000669    9870 api_server.go:52] waiting for apiserver process to appear ...
	I0805 04:39:30.000742    9870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:39:30.502981    9870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:39:31.002816    9870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:39:31.006970    9870 api_server.go:72] duration metric: took 1.006296583s to wait for apiserver process to appear ...
	I0805 04:39:31.006977    9870 api_server.go:88] waiting for apiserver healthz status ...
	I0805 04:39:31.006986    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:36.009172    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:36.009203    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:41.009587    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:41.009635    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:46.010164    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:46.010186    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:51.010754    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:51.010812    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:39:56.011667    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:39:56.011691    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:01.012996    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:01.013040    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:06.014366    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:06.014500    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:11.016610    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:11.016654    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:16.018851    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:16.018904    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:21.021353    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:21.021391    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:26.023726    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:26.023771    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:31.026129    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
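
	Each healthz probe above times out after roughly five seconds ("Client.Timeout exceeded while awaiting headers") and is retried; the apiserver never answers, so from here the runner interleaves log collection between probe rounds. A minimal sketch of such a poll loop (TLS verification is skipped only because this is a throwaway test cluster with a self-signed CA):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func waitForHealthz(url string, deadline time.Duration) bool {
	        client := &http.Client{
	            Timeout:   5 * time.Second, // matches the ~5s gaps between checks above
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        stop := time.Now().Add(deadline)
	        for time.Now().Before(stop) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return true
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return false
	    }

	    func main() {
	        fmt.Println("healthy:", waitForHealthz("https://10.0.2.15:8443/healthz", time.Minute))
	    }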
	I0805 04:40:31.026349    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:40:31.043480    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:40:31.043565    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:40:31.057171    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:40:31.057241    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:40:31.068518    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:40:31.068583    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:40:31.079052    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:40:31.079117    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:40:31.090207    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:40:31.090272    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:40:31.104816    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:40:31.104882    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:40:31.115085    9870 logs.go:276] 0 containers: []
	W0805 04:40:31.115097    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:40:31.115147    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:40:31.125374    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:40:31.125391    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:40:31.125397    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:40:31.165246    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:40:31.165257    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:40:31.209669    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:40:31.209679    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:40:31.222332    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:40:31.222343    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:40:31.234054    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:40:31.234063    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:40:31.248422    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:40:31.248433    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:40:31.259439    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:40:31.259449    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:40:31.284799    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:40:31.284807    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:40:31.289300    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:40:31.289321    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:40:31.390961    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:40:31.390973    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:40:31.405870    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:40:31.405881    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:40:31.417556    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:40:31.417569    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:40:31.428707    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:40:31.428718    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:40:31.456348    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:40:31.456360    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:40:31.474749    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:40:31.474760    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:40:31.494727    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:40:31.494740    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:40:31.510081    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:40:31.510094    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
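
	Each collection pass above follows the same pattern: enumerate container IDs per control-plane component with a `docker ps` name filter, then tail the last 400 lines of each container's logs (plus journalctl for kubelet and Docker). A sketch of the pattern with hypothetical helper names:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs lists all containers (running or exited) whose name
	    // starts with k8s_<component>, as the docker ps lines above do.
	    func containerIDs(component string) []string {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil
	        }
	        return strings.Fields(string(out))
	    }

	    func main() {
	        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	        for _, c := range components {
	            for _, id := range containerIDs(c) {
	                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	                fmt.Printf("== %s [%s]: %d bytes of logs\n", c, id, len(logs))
	            }
	        }
	    }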
	I0805 04:40:34.024224    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:39.026464    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:39.026613    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:40:39.047925    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:40:39.048036    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:40:39.062005    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:40:39.062079    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:40:39.073586    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:40:39.073657    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:40:39.084217    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:40:39.084285    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:40:39.094885    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:40:39.094951    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:40:39.104909    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:40:39.104970    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:40:39.115224    9870 logs.go:276] 0 containers: []
	W0805 04:40:39.115234    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:40:39.115289    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:40:39.125810    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:40:39.125828    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:40:39.125833    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:40:39.140364    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:40:39.140375    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:40:39.151918    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:40:39.151929    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:40:39.172172    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:40:39.172182    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:40:39.183170    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:40:39.183181    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:40:39.187377    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:40:39.187383    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:40:39.205379    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:40:39.205389    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:40:39.217546    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:40:39.217556    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:40:39.256669    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:40:39.256688    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:40:39.271187    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:40:39.271201    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:40:39.291180    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:40:39.291193    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:40:39.302585    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:40:39.302595    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:40:39.326115    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:40:39.326122    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:40:39.362991    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:40:39.363002    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:40:39.384335    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:40:39.384348    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:40:39.398134    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:40:39.398144    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:40:39.409680    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:40:39.409692    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:40:41.950069    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:46.952559    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:46.952811    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:40:46.978461    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:40:46.978584    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:40:46.995789    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:40:46.995862    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:40:47.008241    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:40:47.008312    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:40:47.023648    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:40:47.023713    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:40:47.033925    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:40:47.033990    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:40:47.045277    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:40:47.045343    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:40:47.055761    9870 logs.go:276] 0 containers: []
	W0805 04:40:47.055773    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:40:47.055823    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:40:47.065847    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:40:47.065863    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:40:47.065870    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:40:47.100457    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:40:47.100471    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:40:47.114435    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:40:47.114449    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:40:47.130804    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:40:47.130819    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:40:47.148179    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:40:47.148189    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:40:47.161637    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:40:47.161647    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:40:47.201304    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:40:47.201313    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:40:47.213067    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:40:47.213088    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:40:47.231698    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:40:47.231709    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:40:47.235630    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:40:47.235640    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:40:47.272372    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:40:47.272390    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:40:47.294019    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:40:47.294029    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:40:47.306343    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:40:47.306358    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:40:47.320543    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:40:47.320556    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:40:47.331850    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:40:47.331861    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:40:47.344958    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:40:47.344967    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:40:47.356816    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:40:47.356826    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:40:49.882072    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:40:54.883294    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:40:54.883451    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:40:54.905651    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:40:54.905745    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:40:54.922881    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:40:54.922968    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:40:54.934457    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:40:54.934526    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:40:54.945314    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:40:54.945382    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:40:54.956638    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:40:54.956706    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:40:54.967850    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:40:54.967913    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:40:54.977943    9870 logs.go:276] 0 containers: []
	W0805 04:40:54.977954    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:40:54.978009    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:40:54.988453    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:40:54.988471    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:40:54.988478    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:40:55.025755    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:40:55.025765    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:40:55.036733    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:40:55.036744    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:40:55.049572    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:40:55.049582    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:40:55.060710    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:40:55.060721    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:40:55.064926    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:40:55.064931    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:40:55.100218    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:40:55.100229    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:40:55.114429    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:40:55.114439    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:40:55.136527    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:40:55.136538    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:40:55.148268    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:40:55.148283    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:40:55.160579    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:40:55.160590    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:40:55.177673    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:40:55.177683    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:40:55.189121    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:40:55.189130    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:40:55.225484    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:40:55.225492    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:40:55.240978    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:40:55.240990    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:40:55.254541    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:40:55.254550    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:40:55.274410    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:40:55.274422    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:40:57.801926    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:02.804688    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:02.804861    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:02.820783    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:02.820857    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:02.833913    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:02.833988    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:02.844724    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:02.844812    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:02.854914    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:02.854987    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:02.865190    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:02.865254    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:02.875436    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:02.875494    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:02.885741    9870 logs.go:276] 0 containers: []
	W0805 04:41:02.885753    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:02.885809    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:02.898553    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:02.898572    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:02.898578    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:02.919743    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:02.919755    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:02.937425    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:02.937436    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:02.951695    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:02.951705    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:02.965922    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:02.965933    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:02.981238    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:02.981250    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:02.997294    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:02.997309    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:03.011497    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:03.011507    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:03.015497    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:03.015512    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:03.049857    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:03.049868    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:03.089139    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:03.089158    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:03.102751    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:03.102765    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:03.114751    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:03.114763    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:03.125875    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:03.125890    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:03.138585    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:03.138597    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:03.178683    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:03.178701    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:03.203910    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:03.203918    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:05.720555    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:10.721896    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:10.722117    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:10.740495    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:10.740597    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:10.754430    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:10.754513    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:10.766476    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:10.766547    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:10.778277    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:10.778353    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:10.789291    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:10.789356    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:10.799758    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:10.799818    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:10.809974    9870 logs.go:276] 0 containers: []
	W0805 04:41:10.809987    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:10.810038    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:10.820582    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:10.820598    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:10.820604    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:10.857239    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:10.857249    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:10.878192    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:10.878203    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:10.891808    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:10.891818    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:10.905450    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:10.905461    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:10.909749    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:10.909756    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:10.930191    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:10.930205    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:10.946157    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:10.946168    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:10.960215    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:10.960225    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:10.981729    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:10.981738    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:10.997350    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:10.997362    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:11.010595    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:11.010604    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:11.021771    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:11.021782    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:11.058000    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:11.058013    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:11.095676    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:11.095689    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:11.113616    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:11.113630    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:11.139084    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:11.139100    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:13.653467    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:18.655964    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:18.656169    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:18.681124    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:18.681215    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:18.698281    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:18.698351    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:18.711296    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:18.711359    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:18.728083    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:18.728161    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:18.738251    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:18.738322    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:18.749302    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:18.749370    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:18.759262    9870 logs.go:276] 0 containers: []
	W0805 04:41:18.759274    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:18.759329    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:18.770254    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
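After a failed probe, the tool enumerates control-plane containers one component at a time with docker ps -a --filter name=k8s_<component> --format {{.ID}}. Two IDs for a component (as for kube-apiserver above) mean an exited container plus its restarted successor; zero IDs (kindnet) just produce the warning. A sketch of that discovery step with hypothetical helper names; minikube actually issues these commands on the VM through ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers is a hypothetical helper: it returns every container,
    // running or exited, whose name carries the k8s_<component> prefix that
    // the Docker shim gives Kubernetes pod containers.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // matches the "2 containers: [...]" lines above
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }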
	I0805 04:41:18.770270    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:18.770278    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:18.775334    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:18.775342    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:18.810645    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:18.810659    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:18.824562    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:18.824577    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:18.836318    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:18.836329    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:18.855172    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:18.855182    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:18.893452    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:18.893460    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:18.907354    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:18.907368    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:18.926160    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:18.926173    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:18.942638    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:18.942649    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:18.967720    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:18.967728    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:18.983035    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:18.983045    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:19.021944    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:19.021955    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:19.036850    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:19.036884    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:19.047798    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:19.047811    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:19.065302    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:19.065312    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:19.080998    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:19.081009    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
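Taken together, the rest of this section is one retry loop repeated roughly every eight seconds: probe, time out after five seconds, enumerate containers, tail their logs plus the host journals, pause about 2.5 seconds, probe again. A compact sketch of that loop under the same assumptions; probe() and gather() stand in for the sketches above, and the overall deadline is a guess, since the log never shows the loop exiting:

    package main

    import (
        "net/http"
        "time"
    )

    func probe(url string) error {
        // simplified; the real probe also has to tolerate a self-signed cert
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        resp.Body.Close()
        return nil
    }

    func gather() {
        // enumerate containers, then tail each plus the kubelet/docker
        // journals, dmesg, describe nodes, and container status
        // (see the sketches above)
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            if probe("https://10.0.2.15:8443/healthz") == nil {
                return // apiserver healthy; startup continues
            }
            gather()
            time.Sleep(2500 * time.Millisecond) // ~2.5s pause seen in the timestamps
        }
    }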
	I0805 04:41:21.604980    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:26.607744    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:26.608066    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:26.634267    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:26.634394    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:26.653451    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:26.653534    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:26.666870    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:26.666946    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:26.678621    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:26.678688    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:26.690579    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:26.690646    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:26.701467    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:26.701535    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:26.712244    9870 logs.go:276] 0 containers: []
	W0805 04:41:26.712254    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:26.712306    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:26.722938    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:26.722958    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:26.722963    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:26.727697    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:26.727706    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:26.767080    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:26.767091    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:26.780812    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:26.780822    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:26.792779    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:26.792791    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:26.807136    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:26.807147    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:26.823742    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:26.823752    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:26.861312    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:26.861320    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:26.896126    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:26.896136    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:26.908670    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:26.908685    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:26.920261    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:26.920273    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:26.932087    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:26.932100    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:26.946803    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:26.946816    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:26.960734    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:26.960744    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:26.974092    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:26.974102    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:26.995603    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:26.995614    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:27.013771    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:27.013781    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:29.539399    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:34.541852    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:34.542047    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:34.562718    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:34.562827    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:34.577035    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:34.577107    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:34.599523    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:34.599583    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:34.614794    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:34.614852    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:34.626578    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:34.626641    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:34.646885    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:34.646953    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:34.657186    9870 logs.go:276] 0 containers: []
	W0805 04:41:34.657198    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:34.657248    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:34.668190    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:34.668207    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:34.668212    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:34.682044    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:34.682055    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:34.693797    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:34.693817    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:34.705190    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:34.705202    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:34.726377    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:34.726387    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:34.740205    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:34.740215    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:34.763453    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:34.763460    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:34.802506    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:34.802517    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:34.817240    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:34.817253    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:34.837467    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:34.837485    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:34.871062    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:34.871076    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:34.883530    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:34.883542    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:34.921541    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:34.921553    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:34.935427    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:34.935442    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:34.946969    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:34.946980    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:34.986805    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:34.986818    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:34.991599    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:34.991606    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:37.506014    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:42.507518    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:42.507658    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:42.521233    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:42.521304    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:42.531626    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:42.531701    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:42.541809    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:42.541885    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:42.552070    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:42.552134    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:42.562470    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:42.562526    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:42.573540    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:42.573592    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:42.585240    9870 logs.go:276] 0 containers: []
	W0805 04:41:42.585252    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:42.585305    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:42.596249    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:42.596266    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:42.596273    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:42.600535    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:42.600548    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:42.639569    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:42.639580    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:42.655885    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:42.655897    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:42.669438    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:42.669452    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:42.706121    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:42.706131    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:42.720543    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:42.720553    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:42.733646    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:42.733655    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:42.745234    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:42.745244    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:42.762189    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:42.762198    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:42.788949    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:42.788958    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:42.827174    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:42.827183    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:42.842168    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:42.842178    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:42.855952    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:42.855962    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:42.867254    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:42.867263    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:42.888955    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:42.888972    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:42.903570    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:42.903582    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:45.419767    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:50.422141    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:50.422287    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:50.439193    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:50.439279    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:50.452567    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:50.452636    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:50.463997    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:50.464059    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:50.474436    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:50.474499    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:50.486079    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:50.486143    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:50.496983    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:50.497043    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:50.507051    9870 logs.go:276] 0 containers: []
	W0805 04:41:50.507062    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:50.507115    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:50.517590    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:50.517609    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:50.517615    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:50.532010    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:50.532020    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:50.543048    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:50.543059    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:50.555606    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:50.555617    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:50.559778    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:50.559785    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:50.582727    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:50.582734    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:50.620289    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:50.620300    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:50.631177    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:50.631189    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:50.652874    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:50.652886    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:50.668333    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:50.668346    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:50.681740    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:50.681753    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:50.693613    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:50.693624    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:50.710568    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:50.710579    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:50.724255    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:50.724266    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:50.762588    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:50.762596    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:41:50.798275    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:50.798286    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:50.813097    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:50.813109    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:53.326953    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:41:58.328242    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:41:58.328331    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:41:58.344525    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:41:58.344593    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:41:58.355027    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:41:58.355092    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:41:58.370892    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:41:58.370965    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:41:58.381624    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:41:58.381696    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:41:58.392516    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:41:58.392584    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:41:58.403339    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:41:58.403402    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:41:58.414089    9870 logs.go:276] 0 containers: []
	W0805 04:41:58.414102    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:41:58.414156    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:41:58.425458    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:41:58.425476    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:41:58.425482    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:41:58.436608    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:41:58.436619    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:41:58.455201    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:41:58.455211    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:41:58.468615    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:41:58.468626    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:41:58.490260    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:41:58.490271    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:41:58.507565    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:41:58.507577    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:41:58.522060    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:41:58.522075    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:41:58.533346    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:41:58.533361    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:41:58.557834    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:41:58.557843    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:41:58.572659    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:41:58.572672    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:41:58.584052    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:41:58.584062    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:41:58.621411    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:41:58.621422    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:41:58.635647    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:41:58.635661    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:41:58.676203    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:41:58.676215    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:41:58.688130    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:41:58.688141    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:41:58.700069    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:41:58.700083    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:41:58.704155    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:41:58.704162    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:01.241376    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:06.243277    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:06.243469    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:06.265250    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:06.265345    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:06.279489    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:06.279564    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:06.291829    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:06.291897    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:06.302690    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:06.302757    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:06.313531    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:06.313599    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:06.324412    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:06.324480    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:06.334172    9870 logs.go:276] 0 containers: []
	W0805 04:42:06.334183    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:06.334236    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:06.345389    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:06.345407    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:06.345412    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:06.359382    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:06.359396    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:06.371072    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:06.371085    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:06.384711    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:06.384722    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:06.398584    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:06.398599    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:06.414350    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:06.414362    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:06.437627    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:06.437638    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:06.452073    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:06.452086    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:06.456405    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:06.456412    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:06.478544    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:06.478556    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:06.495836    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:06.495849    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:06.509827    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:06.509839    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:06.546916    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:06.546926    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:06.567147    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:06.567158    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:06.609315    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:06.609326    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:06.623503    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:06.623514    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:06.635655    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:06.635666    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:09.176301    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:14.178720    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:14.179182    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:14.218336    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:14.218472    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:14.239936    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:14.240034    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:14.254577    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:14.254649    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:14.266972    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:14.267044    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:14.277990    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:14.278061    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:14.293221    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:14.293298    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:14.303329    9870 logs.go:276] 0 containers: []
	W0805 04:42:14.303339    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:14.303395    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:14.314104    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:14.314125    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:14.314131    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:14.318587    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:14.318594    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:14.333166    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:14.333177    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:14.345197    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:14.345209    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:14.359977    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:14.359988    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:14.371962    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:14.371974    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:14.389347    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:14.389357    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:14.403368    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:14.403378    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:14.416213    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:14.416223    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:14.428174    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:14.428189    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:14.442374    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:14.442386    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:14.458643    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:14.458653    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:14.486831    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:14.486842    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:14.499117    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:14.499133    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:14.524092    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:14.524103    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:14.561332    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:14.561341    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:14.596261    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:14.596273    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:17.136977    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:22.138495    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:22.138908    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:22.178286    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:22.178410    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:22.201016    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:22.201113    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:22.216335    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:22.216410    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:22.228767    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:22.228851    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:22.241370    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:22.241435    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:22.252083    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:22.252145    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:22.265802    9870 logs.go:276] 0 containers: []
	W0805 04:42:22.265814    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:22.265873    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:22.276333    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:22.276372    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:22.276379    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:22.314651    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:22.314663    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:22.328600    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:22.328610    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:22.350193    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:22.350204    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:22.364348    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:22.364359    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:22.376073    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:22.376084    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:22.388343    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:22.388354    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:22.425282    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:22.425294    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:22.429106    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:22.429112    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:22.475051    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:22.475061    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:22.493110    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:22.493121    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:22.505005    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:22.505019    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:22.518908    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:22.518919    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:22.530616    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:22.530628    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:22.545045    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:22.545055    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:22.563911    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:22.563921    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:22.578245    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:22.578255    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:25.103419    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:30.105846    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:30.106133    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:30.135874    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:30.135994    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:30.155814    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:30.155898    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:30.169331    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:30.169405    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:30.181093    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:30.181171    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:30.192633    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:30.192701    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:30.204535    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:30.204605    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:30.214829    9870 logs.go:276] 0 containers: []
	W0805 04:42:30.214842    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:30.214895    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:30.225277    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:30.225295    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:30.225301    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:30.261864    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:30.261871    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:30.297817    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:30.297827    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:30.319185    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:30.319197    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:30.333458    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:30.333472    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:30.371228    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:30.371240    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:30.384764    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:30.384778    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:30.398612    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:30.398625    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:30.423621    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:30.423638    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:30.435774    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:30.435786    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:30.464016    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:30.464029    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:30.482955    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:30.482966    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:30.496883    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:30.496896    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:30.513356    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:30.513368    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:30.519047    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:30.519055    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:30.535798    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:30.535812    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:30.547556    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:30.547569    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:33.061751    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:38.064488    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:38.064653    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:38.077906    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:38.077984    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:38.088839    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:38.088899    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:38.099929    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:38.099998    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:38.110612    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:38.110686    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:38.121234    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:38.121294    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:38.131586    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:38.131648    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:38.141609    9870 logs.go:276] 0 containers: []
	W0805 04:42:38.141620    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:38.141678    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:38.152189    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:38.152209    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:38.152214    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:38.166178    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:38.166188    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:38.177087    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:38.177098    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:38.191695    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:38.191704    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:38.229236    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:38.229249    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:38.243095    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:38.243105    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:38.256564    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:38.256573    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:38.274115    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:38.274126    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:38.285636    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:38.285645    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:38.297097    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:38.297107    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:38.308099    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:38.308112    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:38.330654    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:38.330661    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:38.335094    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:38.335101    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:38.371263    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:38.371276    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:38.385898    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:38.385912    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:38.407036    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:38.407048    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:38.420020    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:38.420032    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
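
Each gather cycle above follows the same fan-out: list containers whose Docker name carries the k8s_<component> prefix, then tail the last 400 log lines of each. A minimal Go sketch of that pattern (illustrative only, not minikube's logs.go implementation; it shells out to docker directly rather than going through ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // gatherComponentLogs mirrors the fan-out in the log: list containers
    // whose name matches k8s_<component>, then tail each one's logs.
    func gatherComponentLogs(component string) error {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return err
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    	for _, id := range ids {
    		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			return err
    		}
    		fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
    	}
    	return nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		if err := gatherComponentLogs(c); err != nil {
    			fmt.Println("gather", c, "failed:", err)
    		}
    	}
    }

The "0 containers ... No container was found matching \"kindnet\"" warning in each cycle is the expected empty result of this listing step, since this cluster uses the bridge CNI rather than kindnet.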
	I0805 04:42:40.961196    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:45.963717    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:45.963933    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:45.987179    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:45.987294    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:46.003467    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:46.003542    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:46.018284    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:46.018358    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:46.029393    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:46.029464    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:46.042277    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:46.042345    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:46.052753    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:46.052825    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:46.062712    9870 logs.go:276] 0 containers: []
	W0805 04:42:46.062725    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:46.062781    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:46.073168    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:46.073186    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:46.073192    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:46.084388    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:46.084401    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:46.088639    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:46.088647    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:46.101298    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:46.101308    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:46.118435    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:46.118445    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:46.129974    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:46.129986    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:46.151622    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:46.151632    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:46.165189    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:46.165199    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:46.176918    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:46.176928    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:46.194919    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:46.194932    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:46.207298    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:46.207309    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:46.246613    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:46.246623    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:46.286133    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:46.286142    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:46.299777    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:46.299787    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:46.324066    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:46.324073    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:46.364188    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:46.364202    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:46.381836    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:46.381845    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:48.898812    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:42:53.901089    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:42:53.901255    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:42:53.920153    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:42:53.920237    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:42:53.933527    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:42:53.933595    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:42:53.944883    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:42:53.944945    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:42:53.955605    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:42:53.955663    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:42:53.966400    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:42:53.966464    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:42:53.977033    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:42:53.977095    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:42:53.987398    9870 logs.go:276] 0 containers: []
	W0805 04:42:53.987409    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:42:53.987461    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:42:53.998069    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:42:53.998087    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:42:53.998092    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:42:54.020365    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:42:54.020373    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:42:54.024395    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:42:54.024402    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:42:54.037820    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:42:54.037832    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:42:54.074818    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:42:54.074833    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:42:54.092032    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:42:54.092045    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:42:54.115388    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:42:54.115398    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:42:54.129778    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:42:54.129788    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:42:54.140834    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:42:54.140845    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:42:54.178851    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:42:54.178860    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:42:54.199531    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:42:54.199541    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:42:54.220001    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:42:54.220010    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:42:54.231602    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:42:54.231613    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:42:54.249069    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:42:54.249079    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:42:54.260213    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:42:54.260226    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:42:54.299056    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:42:54.299067    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:42:54.316563    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:42:54.316575    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:42:56.828957    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:01.831491    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:01.831819    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:01.862479    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:43:01.862589    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:01.882259    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:43:01.882345    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:01.896357    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:43:01.896429    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:01.908128    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:43:01.908197    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:01.918696    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:43:01.918755    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:01.929343    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:43:01.929411    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:01.939559    9870 logs.go:276] 0 containers: []
	W0805 04:43:01.939570    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:01.939627    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:01.950725    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:43:01.950743    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:43:01.950749    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:43:01.964924    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:43:01.964935    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:01.980255    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:01.980266    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:02.016990    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:43:02.017004    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:43:02.030710    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:43:02.030718    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:43:02.067884    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:43:02.067897    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:43:02.079459    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:43:02.079471    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:43:02.090696    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:02.090706    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:02.095494    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:43:02.095502    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:43:02.109334    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:43:02.109349    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:43:02.123491    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:43:02.123505    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:43:02.137980    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:02.137991    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:02.176300    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:43:02.176314    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:43:02.198675    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:43:02.198689    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:43:02.215617    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:43:02.215628    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:43:02.226849    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:43:02.226861    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:43:02.238422    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:02.238434    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:04.764069    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:09.766452    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:09.766602    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:09.780171    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:43:09.780258    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:09.792056    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:43:09.792131    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:09.802799    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:43:09.802862    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:09.813595    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:43:09.813676    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:09.824198    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:43:09.824262    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:09.834849    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:43:09.835013    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:09.845368    9870 logs.go:276] 0 containers: []
	W0805 04:43:09.845378    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:09.845423    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:09.863000    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:43:09.863015    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:09.863021    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:09.900800    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:43:09.900816    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:43:09.924415    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:43:09.924425    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:43:09.935950    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:43:09.935959    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:43:09.947268    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:43:09.947283    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:43:09.960696    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:09.960706    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:09.964856    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:43:09.964862    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:43:09.978881    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:43:09.978895    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:43:10.016349    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:43:10.016358    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:43:10.030096    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:43:10.030105    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:43:10.044680    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:43:10.044690    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:43:10.056140    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:43:10.056155    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:43:10.073845    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:43:10.073854    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:43:10.085021    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:10.085031    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:10.106740    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:10.106748    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:10.143499    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:43:10.143513    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:43:10.164066    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:43:10.164077    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:12.677844    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:17.680112    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:17.680243    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:17.691594    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:43:17.691673    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:17.710037    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:43:17.710118    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:17.732256    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:43:17.732323    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:17.745239    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:43:17.745306    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:17.756333    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:43:17.756401    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:17.767335    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:43:17.767405    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:17.779105    9870 logs.go:276] 0 containers: []
	W0805 04:43:17.779117    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:17.779174    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:17.789725    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:43:17.789743    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:43:17.789750    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:43:17.807465    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:17.807476    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:17.830027    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:17.830035    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:17.834148    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:43:17.834157    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:43:17.845931    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:43:17.845942    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:43:17.857638    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:17.857649    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:17.892498    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:43:17.892510    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:43:17.930509    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:43:17.930523    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:43:17.944698    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:43:17.944707    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:43:17.958894    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:43:17.958904    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:43:17.980466    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:43:17.980481    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:43:17.998645    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:43:17.998655    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:43:18.016052    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:43:18.016063    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:18.028091    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:43:18.028101    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:43:18.042463    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:43:18.042473    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:43:18.053549    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:43:18.053561    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:43:18.067500    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:18.067510    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:20.604931    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:25.607232    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:25.607390    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:43:25.618851    9870 logs.go:276] 2 containers: [1980c300e1b1 d9ac8003079b]
	I0805 04:43:25.618920    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:43:25.633424    9870 logs.go:276] 2 containers: [e57e577b307e 0f824af6ef04]
	I0805 04:43:25.633485    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:43:25.643650    9870 logs.go:276] 1 containers: [94a487a63f31]
	I0805 04:43:25.643716    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:43:25.655913    9870 logs.go:276] 2 containers: [cf85c477525f c3de4560f438]
	I0805 04:43:25.655979    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:43:25.666007    9870 logs.go:276] 1 containers: [258557ad37eb]
	I0805 04:43:25.666064    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:43:25.681618    9870 logs.go:276] 2 containers: [4eb46236eafe c61b252b6587]
	I0805 04:43:25.681682    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:43:25.691819    9870 logs.go:276] 0 containers: []
	W0805 04:43:25.691829    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:43:25.691878    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:43:25.702021    9870 logs.go:276] 2 containers: [b309054692ae 49808911dbbb]
	I0805 04:43:25.702042    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:43:25.702047    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:43:25.706211    9870 logs.go:123] Gathering logs for etcd [0f824af6ef04] ...
	I0805 04:43:25.706221    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f824af6ef04"
	I0805 04:43:25.720609    9870 logs.go:123] Gathering logs for kube-scheduler [c3de4560f438] ...
	I0805 04:43:25.720619    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3de4560f438"
	I0805 04:43:25.742330    9870 logs.go:123] Gathering logs for kube-controller-manager [c61b252b6587] ...
	I0805 04:43:25.742341    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c61b252b6587"
	I0805 04:43:25.759671    9870 logs.go:123] Gathering logs for storage-provisioner [49808911dbbb] ...
	I0805 04:43:25.759681    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49808911dbbb"
	I0805 04:43:25.771002    9870 logs.go:123] Gathering logs for storage-provisioner [b309054692ae] ...
	I0805 04:43:25.771013    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b309054692ae"
	I0805 04:43:25.782180    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:43:25.782190    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:43:25.804611    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:43:25.804618    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:43:25.843724    9870 logs.go:123] Gathering logs for kube-apiserver [d9ac8003079b] ...
	I0805 04:43:25.843737    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9ac8003079b"
	I0805 04:43:25.882416    9870 logs.go:123] Gathering logs for coredns [94a487a63f31] ...
	I0805 04:43:25.882426    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a487a63f31"
	I0805 04:43:25.896801    9870 logs.go:123] Gathering logs for kube-scheduler [cf85c477525f] ...
	I0805 04:43:25.896813    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf85c477525f"
	I0805 04:43:25.910223    9870 logs.go:123] Gathering logs for kube-controller-manager [4eb46236eafe] ...
	I0805 04:43:25.910232    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb46236eafe"
	I0805 04:43:25.932294    9870 logs.go:123] Gathering logs for kube-apiserver [1980c300e1b1] ...
	I0805 04:43:25.932305    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1980c300e1b1"
	I0805 04:43:25.946561    9870 logs.go:123] Gathering logs for etcd [e57e577b307e] ...
	I0805 04:43:25.946570    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57e577b307e"
	I0805 04:43:25.963554    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:43:25.963564    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:43:25.976252    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:43:25.976262    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:43:26.014611    9870 logs.go:123] Gathering logs for kube-proxy [258557ad37eb] ...
	I0805 04:43:26.014624    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 258557ad37eb"
	I0805 04:43:28.529479    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:33.531745    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:33.531829    9870 kubeadm.go:597] duration metric: took 4m4.115085833s to restartPrimaryControlPlane
	W0805 04:43:33.531905    9870 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
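
The healthz probes that fill the preceding four minutes all follow one pattern: an HTTPS GET against https://10.0.2.15:8443/healthz with a roughly five-second client deadline (visible in the gap between each "Checking" and "stopped" pair), retried until the apiserver answers or the restart budget runs out. A minimal Go sketch of that pattern; the timeout, retry interval, and TLS handling here are assumptions read off the timestamps, not minikube's actual api_server.go code:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // pollHealthz probes /healthz until it answers 200 or the overall
    // budget expires. A 5s client timeout produces exactly the
    // "Client.Timeout exceeded while awaiting headers" error seen above.
    func pollHealthz(url string, budget time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The probe talks to an apiserver with a self-signed cert.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(budget)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // control plane is healthy
    			}
    		}
    		time.Sleep(2 * time.Second) // back off before the next probe
    	}
    	return fmt.Errorf("apiserver at %s never became healthy within %s", url, budget)
    }

    func main() {
    	if err := pollHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

Since the endpoint never answered within the 4m budget, the restart path gives up here and falls back to a full kubeadm reset.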
	I0805 04:43:33.531944    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0805 04:43:34.598765    9870 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.066797875s)
	I0805 04:43:34.598836    9870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 04:43:34.603632    9870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 04:43:34.606629    9870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 04:43:34.609283    9870 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 04:43:34.609289    9870 kubeadm.go:157] found existing configuration files:
	
	I0805 04:43:34.609313    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/admin.conf
	I0805 04:43:34.611699    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 04:43:34.611723    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 04:43:34.614718    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/kubelet.conf
	I0805 04:43:34.617554    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 04:43:34.617577    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 04:43:34.620231    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/controller-manager.conf
	I0805 04:43:34.623253    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 04:43:34.623273    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 04:43:34.626170    9870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/scheduler.conf
	I0805 04:43:34.628640    9870 kubeadm.go:163] "https://control-plane.minikube.internal:51465" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51465 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 04:43:34.628659    9870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
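
The four grep/rm pairs above implement a simple stale-config sweep: a kubeconfig is kept only if it already references the expected control-plane endpoint, otherwise it is removed so kubeadm init regenerates it. A compact Go sketch of that logic; the paths and endpoint are taken from the log, while the in-process file check stands in for the ssh_runner "sudo grep" / "sudo rm -f" round trips:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanStaleConfigs keeps a kubeconfig only if it already points at
    // the expected endpoint; anything missing or stale is removed.
    func cleanStaleConfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or stale server URL: remove so kubeadm
    			// regenerates it during init.
    			if err := os.Remove(p); err == nil {
    				fmt.Printf("removed stale config %s\n", p)
    			}
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:51465", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }

In this run all four files were already absent (the earlier ls exited with status 2), so every grep fails and the rm calls are no-ops before init proceeds.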
	I0805 04:43:34.631971    9870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 04:43:34.650461    9870 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0805 04:43:34.650530    9870 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 04:43:34.701823    9870 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 04:43:34.701903    9870 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 04:43:34.701973    9870 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 04:43:34.751892    9870 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 04:43:34.757145    9870 out.go:204]   - Generating certificates and keys ...
	I0805 04:43:34.757181    9870 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 04:43:34.757252    9870 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 04:43:34.757357    9870 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 04:43:34.757388    9870 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 04:43:34.757456    9870 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 04:43:34.757500    9870 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 04:43:34.757592    9870 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 04:43:34.757625    9870 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 04:43:34.757666    9870 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 04:43:34.757724    9870 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 04:43:34.757747    9870 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 04:43:34.757776    9870 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 04:43:34.843975    9870 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 04:43:34.960871    9870 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 04:43:35.022431    9870 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 04:43:35.144484    9870 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 04:43:35.173113    9870 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 04:43:35.173607    9870 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 04:43:35.173629    9870 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 04:43:35.261357    9870 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 04:43:35.264512    9870 out.go:204]   - Booting up control plane ...
	I0805 04:43:35.264654    9870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 04:43:35.265188    9870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 04:43:35.266060    9870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 04:43:35.269489    9870 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 04:43:35.270269    9870 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 04:43:39.772615    9870 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501764 seconds
	I0805 04:43:39.772693    9870 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 04:43:39.776943    9870 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 04:43:40.299325    9870 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 04:43:40.299597    9870 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-528000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 04:43:40.802911    9870 kubeadm.go:310] [bootstrap-token] Using token: k9o0ky.p7snj7ic9optnkq4
	I0805 04:43:40.804380    9870 out.go:204]   - Configuring RBAC rules ...
	I0805 04:43:40.804442    9870 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 04:43:40.805000    9870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 04:43:40.808592    9870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 04:43:40.809814    9870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 04:43:40.810756    9870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 04:43:40.811606    9870 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 04:43:40.814703    9870 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 04:43:40.983063    9870 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 04:43:41.207305    9870 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 04:43:41.207824    9870 kubeadm.go:310] 
	I0805 04:43:41.207853    9870 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 04:43:41.207856    9870 kubeadm.go:310] 
	I0805 04:43:41.207893    9870 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 04:43:41.207898    9870 kubeadm.go:310] 
	I0805 04:43:41.207909    9870 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 04:43:41.207936    9870 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 04:43:41.207970    9870 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 04:43:41.207976    9870 kubeadm.go:310] 
	I0805 04:43:41.208003    9870 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 04:43:41.208007    9870 kubeadm.go:310] 
	I0805 04:43:41.208031    9870 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 04:43:41.208034    9870 kubeadm.go:310] 
	I0805 04:43:41.208064    9870 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 04:43:41.208106    9870 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 04:43:41.208140    9870 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 04:43:41.208146    9870 kubeadm.go:310] 
	I0805 04:43:41.208183    9870 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 04:43:41.208220    9870 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 04:43:41.208224    9870 kubeadm.go:310] 
	I0805 04:43:41.208267    9870 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k9o0ky.p7snj7ic9optnkq4 \
	I0805 04:43:41.208323    9870 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:00ad0c80a9f7b4b654bf16d7fdaf8cb3872452317480a453e3b9036c421b1809 \
	I0805 04:43:41.208337    9870 kubeadm.go:310] 	--control-plane 
	I0805 04:43:41.208341    9870 kubeadm.go:310] 
	I0805 04:43:41.208385    9870 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 04:43:41.208389    9870 kubeadm.go:310] 
	I0805 04:43:41.208448    9870 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k9o0ky.p7snj7ic9optnkq4 \
	I0805 04:43:41.208504    9870 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:00ad0c80a9f7b4b654bf16d7fdaf8cb3872452317480a453e3b9036c421b1809 
	I0805 04:43:41.208644    9870 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 04:43:41.208654    9870 cni.go:84] Creating CNI manager for ""
	I0805 04:43:41.208666    9870 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:43:41.211969    9870 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 04:43:41.218093    9870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 04:43:41.220931    9870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
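
The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation, a plausible bridge conflist of the general shape this step installs, written the same way from memory to disk; the subnet, plugin list, and option values below are assumptions, not minikube's actual file:

    package main

    import "os"

    // bridgeConflist is an assumed example of a bridge CNI config; the
    // exact contents minikube writes are not captured in this report.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// 0644 matches the usual permissions for CNI config files.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }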
	I0805 04:43:41.225974    9870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 04:43:41.226015    9870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 04:43:41.226036    9870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-528000 minikube.k8s.io/updated_at=2024_08_05T04_43_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f minikube.k8s.io/name=stopped-upgrade-528000 minikube.k8s.io/primary=true
	I0805 04:43:41.266820    9870 kubeadm.go:1113] duration metric: took 40.838292ms to wait for elevateKubeSystemPrivileges
	I0805 04:43:41.266835    9870 ops.go:34] apiserver oom_adj: -16
	I0805 04:43:41.266840    9870 kubeadm.go:394] duration metric: took 4m11.863592666s to StartCluster
	I0805 04:43:41.266850    9870 settings.go:142] acquiring lock: {Name:mk4ccaf175b574f554efa4f63e0208c978f3f34f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:43:41.266940    9870 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:43:41.267374    9870 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/kubeconfig: {Name:mk9388f295704cbd2679ba0e5c0bb91678f79ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:43:41.267587    9870 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:43:41.267642    9870 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 04:43:41.267682    9870 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-528000"
	I0805 04:43:41.267690    9870 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-528000"
	I0805 04:43:41.267696    9870 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:43:41.267705    9870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-528000"
	I0805 04:43:41.267694    9870 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-528000"
	W0805 04:43:41.267767    9870 addons.go:243] addon storage-provisioner should already be in state true
	I0805 04:43:41.267778    9870 host.go:66] Checking if "stopped-upgrade-528000" exists ...
	I0805 04:43:41.272048    9870 out.go:177] * Verifying Kubernetes components...
	I0805 04:43:41.272724    9870 kapi.go:59] client config for stopped-upgrade-528000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/stopped-upgrade-528000/client.key", CAFile:"/Users/jenkins/minikube-integration/19377-7130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1024d01b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 04:43:41.276228    9870 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-528000"
	W0805 04:43:41.276233    9870 addons.go:243] addon default-storageclass should already be in state true
	I0805 04:43:41.276241    9870 host.go:66] Checking if "stopped-upgrade-528000" exists ...
	I0805 04:43:41.276819    9870 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 04:43:41.276825    9870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 04:43:41.276830    9870 sshutil.go:53] new ssh client: &{IP:localhost Port:51431 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/id_rsa Username:docker}
	I0805 04:43:41.279995    9870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 04:43:41.287304    9870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 04:43:41.287332    9870 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 04:43:41.287345    9870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 04:43:41.287353    9870 sshutil.go:53] new ssh client: &{IP:localhost Port:51431 SSHKeyPath:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/stopped-upgrade-528000/id_rsa Username:docker}
	I0805 04:43:41.373757    9870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 04:43:41.378614    9870 api_server.go:52] waiting for apiserver process to appear ...
	I0805 04:43:41.378656    9870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 04:43:41.382491    9870 api_server.go:72] duration metric: took 114.892292ms to wait for apiserver process to appear ...
	I0805 04:43:41.382499    9870 api_server.go:88] waiting for apiserver healthz status ...
	I0805 04:43:41.382506    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:41.392326    9870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 04:43:41.455709    9870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 04:43:46.384772    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:46.384859    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:51.385648    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:51.385672    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:43:56.386206    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:43:56.386243    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:01.387385    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:01.387421    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:06.388426    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:06.388477    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:11.389750    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:11.389787    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0805 04:44:11.746142    9870 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0805 04:44:11.749411    9870 out.go:177] * Enabled addons: storage-provisioner
	I0805 04:44:11.758092    9870 addons.go:510] duration metric: took 30.490183125s for enable addons: enabled=[storage-provisioner]
	I0805 04:44:16.391380    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:16.391409    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:21.391709    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:21.391744    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:26.393815    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:26.393848    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:31.396136    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:31.396181    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:36.398549    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:36.398647    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:41.401291    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:41.401472    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:44:41.416094    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:44:41.416179    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:44:41.428270    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:44:41.428334    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:44:41.438784    9870 logs.go:276] 2 containers: [0a228b1b51ad 945cf216c4ce]
	I0805 04:44:41.438852    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:44:41.448909    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:44:41.448968    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:44:41.459929    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:44:41.459997    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:44:41.470354    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:44:41.470413    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:44:41.480055    9870 logs.go:276] 0 containers: []
	W0805 04:44:41.480064    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:44:41.480113    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:44:41.493178    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:44:41.493192    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:44:41.493197    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:44:41.507941    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:44:41.507952    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:44:41.532328    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:44:41.532337    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:44:41.543673    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:44:41.543686    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:44:41.576346    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:44:41.576354    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:44:41.614763    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:44:41.614776    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:44:41.626399    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:44:41.626413    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:44:41.638174    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:44:41.638185    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:44:41.653204    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:44:41.653214    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:44:41.665109    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:44:41.665121    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:44:41.683107    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:44:41.683117    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:44:41.694702    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:44:41.694716    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:44:41.698840    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:44:41.698849    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
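Editor's note: each time the healthz wait fails, the harness runs the diagnostics pass seen above — discover each control-plane component's container via a `docker ps` name filter, then tail the last 400 lines of its logs. A sketch of that two-step pattern (local os/exec in place of the SSH transport; the docker argv mirrors the logged commands):

```go
// Sketch: container discovery plus log tailing, as in the cycle above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers whose name matches k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs fetches the last 400 log lines of one container.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			logs, _ := tailLogs(id)
			_ = logs // the real harness folds these into the test report
		}
	}
}
```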
	I0805 04:44:44.213411    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:49.216262    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:49.216717    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:44:49.257848    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:44:49.257979    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:44:49.277967    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:44:49.278082    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:44:49.293402    9870 logs.go:276] 2 containers: [0a228b1b51ad 945cf216c4ce]
	I0805 04:44:49.293477    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:44:49.306127    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:44:49.306191    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:44:49.317308    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:44:49.317373    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:44:49.327844    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:44:49.327905    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:44:49.337512    9870 logs.go:276] 0 containers: []
	W0805 04:44:49.337524    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:44:49.337582    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:44:49.348240    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:44:49.348255    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:44:49.348262    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:44:49.353011    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:44:49.353020    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:44:49.387414    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:44:49.387426    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:44:49.401935    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:44:49.401949    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:44:49.413709    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:44:49.413720    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:44:49.428683    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:44:49.428693    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:44:49.446216    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:44:49.446227    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:44:49.457742    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:44:49.457755    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:44:49.484906    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:44:49.484913    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:44:49.496432    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:44:49.496444    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:44:49.529411    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:44:49.529423    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:44:49.543228    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:44:49.543241    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:44:49.555268    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:44:49.555281    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:44:52.069094    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:44:57.071444    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:44:57.071887    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:44:57.113682    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:44:57.113815    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:44:57.135136    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:44:57.135237    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:44:57.149858    9870 logs.go:276] 2 containers: [0a228b1b51ad 945cf216c4ce]
	I0805 04:44:57.149923    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:44:57.162332    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:44:57.162393    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:44:57.173099    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:44:57.173163    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:44:57.183468    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:44:57.183523    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:44:57.194900    9870 logs.go:276] 0 containers: []
	W0805 04:44:57.194913    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:44:57.194968    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:44:57.207042    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:44:57.207057    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:44:57.207062    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:44:57.219099    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:44:57.219111    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:44:57.253353    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:44:57.253366    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:44:57.257990    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:44:57.258000    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:44:57.293928    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:44:57.293940    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:44:57.309867    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:44:57.309877    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:44:57.333550    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:44:57.333561    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:44:57.354140    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:44:57.354150    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:44:57.365985    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:44:57.365996    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:44:57.380269    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:44:57.380280    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:44:57.391821    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:44:57.391829    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:44:57.403602    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:44:57.403615    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:44:57.418223    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:44:57.418233    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:44:59.932404    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:45:04.935130    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:45:04.935564    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:45:04.974269    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:45:04.974413    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:45:04.995527    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:45:04.995617    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:45:05.010899    9870 logs.go:276] 2 containers: [0a228b1b51ad 945cf216c4ce]
	I0805 04:45:05.010970    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:45:05.023844    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:45:05.023913    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:45:05.034388    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:45:05.034449    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:45:05.045017    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:45:05.045080    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:45:05.056666    9870 logs.go:276] 0 containers: []
	W0805 04:45:05.056676    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:45:05.056728    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:45:05.067993    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:45:05.068009    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:45:05.068015    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:45:05.072186    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:45:05.072196    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:45:05.086596    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:45:05.086607    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:45:05.101769    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:45:05.101782    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:45:05.119915    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:45:05.119925    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:45:05.131393    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:45:05.131402    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:45:05.155768    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:45:05.155775    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:45:05.167704    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:45:05.167716    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:45:05.202026    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:45:05.202034    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:45:05.236305    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:45:05.236320    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:45:05.250801    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:45:05.250814    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:45:05.262781    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:45:05.262791    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:45:05.275243    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:45:05.275253    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:45:07.788758    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:45:12.791305    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:45:12.791726    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:45:12.831127    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:45:12.831251    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:45:12.852702    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:45:12.852809    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:45:12.876262    9870 logs.go:276] 2 containers: [0a228b1b51ad 945cf216c4ce]
	I0805 04:45:12.876329    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:45:12.887796    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:45:12.887870    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:45:12.898171    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:45:12.898237    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:45:12.908689    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:45:12.908760    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:45:12.919231    9870 logs.go:276] 0 containers: []
	W0805 04:45:12.919242    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:45:12.919296    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:45:12.934083    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:45:12.934101    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:45:12.934106    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:45:12.945678    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:45:12.945690    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:45:12.963643    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:45:12.963655    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:45:12.974916    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:45:12.974926    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:45:12.989714    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:45:12.989725    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:45:13.003572    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:45:13.003582    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:45:13.018384    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:45:13.018398    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:45:13.033114    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:45:13.033128    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:45:13.044228    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:45:13.044240    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:45:13.055284    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:45:13.055297    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:45:13.078385    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:45:13.078396    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:45:13.111125    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:45:13.111136    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:45:13.115105    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:45:13.115113    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:45:15.654758    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:45:20.657518    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:45:20.657848    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:45:20.688261    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:45:20.688390    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:45:20.705834    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:45:20.705966    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:45:20.719834    9870 logs.go:276] 2 containers: [0a228b1b51ad 945cf216c4ce]
	I0805 04:45:20.719896    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:45:20.731760    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:45:20.731823    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:45:20.742345    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:45:20.742417    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:45:20.752878    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:45:20.752939    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:45:20.763453    9870 logs.go:276] 0 containers: []
	W0805 04:45:20.763464    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:45:20.763511    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:45:20.773809    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:45:20.773824    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:45:20.773830    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:45:20.785534    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:45:20.785546    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:45:20.797667    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:45:20.797678    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:45:20.830700    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:45:20.830711    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:45:20.863865    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:45:20.863878    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:45:20.877997    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:45:20.878010    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:45:20.889717    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:45:20.889729    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:45:20.910931    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:45:20.910942    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:45:20.922613    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:45:20.922626    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:45:20.948158    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:45:20.948170    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:45:20.959406    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:45:20.959416    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:45:20.963712    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:45:20.963720    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:45:20.977575    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:45:20.977589    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:45:23.494144    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:45:28.497050    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:45:28.497498    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:45:28.534777    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:45:28.534945    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:45:28.558110    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:45:28.558194    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:45:28.573238    9870 logs.go:276] 2 containers: [0a228b1b51ad 945cf216c4ce]
	I0805 04:45:28.573302    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:45:28.587334    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:45:28.587400    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:45:28.597665    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:45:28.597720    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:45:28.608254    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:45:28.608327    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:45:28.618682    9870 logs.go:276] 0 containers: []
	W0805 04:45:28.618697    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:45:28.618756    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:45:28.629417    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:45:28.629432    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:45:28.629437    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:45:28.641441    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:45:28.641450    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:45:28.656375    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:45:28.656388    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:45:28.667816    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:45:28.667828    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:45:28.679454    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:45:28.679463    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:45:28.703814    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:45:28.703821    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:45:28.737544    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:45:28.737551    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:45:28.742138    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:45:28.742146    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:45:28.776587    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:45:28.776600    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:45:28.799941    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:45:28.799954    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:45:28.811198    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:45:28.811213    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:45:28.825559    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:45:28.825570    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:45:28.840546    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:45:28.840557    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:45:31.353612    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:45:36.356121    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:45:36.356299    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:45:36.381477    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:45:36.381576    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:45:36.400454    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:45:36.400539    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:45:36.414662    9870 logs.go:276] 2 containers: [0a228b1b51ad 945cf216c4ce]
	I0805 04:45:36.414723    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:45:36.427290    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:45:36.427351    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:45:36.437722    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:45:36.437781    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:45:36.448296    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:45:36.448360    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:45:36.458224    9870 logs.go:276] 0 containers: []
	W0805 04:45:36.458237    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:45:36.458283    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:45:36.468750    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:45:36.468764    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:45:36.468769    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:45:36.483351    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:45:36.483361    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:45:36.495025    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:45:36.495038    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:45:36.518803    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:45:36.518813    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:45:36.530433    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:45:36.530445    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:45:36.534980    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:45:36.534988    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:45:36.549464    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:45:36.549478    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:45:36.560601    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:45:36.560614    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:45:36.572209    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:45:36.572220    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:45:36.583532    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:45:36.583545    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:45:36.605753    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:45:36.605763    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:45:36.641000    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:45:36.641011    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:45:36.675990    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:45:36.676002    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:45:39.191921    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:45:44.192601    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:45:44.192849    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:45:44.215396    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:45:44.215502    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:45:44.230670    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:45:44.230732    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:45:44.243143    9870 logs.go:276] 2 containers: [0a228b1b51ad 945cf216c4ce]
	I0805 04:45:44.243218    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:45:44.253806    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:45:44.253862    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:45:44.264424    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:45:44.264482    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:45:44.274935    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:45:44.274999    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:45:44.284539    9870 logs.go:276] 0 containers: []
	W0805 04:45:44.284551    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:45:44.284605    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:45:44.294840    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:45:44.294855    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:45:44.294860    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:45:44.305891    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:45:44.305904    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:45:44.320104    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:45:44.320116    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:45:44.332142    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:45:44.332152    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:45:44.343415    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:45:44.343424    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:45:44.359848    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:45:44.359860    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:45:44.377470    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:45:44.377479    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:45:44.411789    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:45:44.411796    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:45:44.415895    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:45:44.415903    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:45:44.449053    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:45:44.449065    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:45:44.463689    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:45:44.463702    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:45:44.477567    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:45:44.477579    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:45:44.489874    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:45:44.489887    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:45:47.015823    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:45:52.018708    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:45:52.019169    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:45:52.058828    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:45:52.058959    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:45:52.080686    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:45:52.080785    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:45:52.100711    9870 logs.go:276] 2 containers: [0a228b1b51ad 945cf216c4ce]
	I0805 04:45:52.100783    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:45:52.112943    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:45:52.113010    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:45:52.123942    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:45:52.124009    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:45:52.134359    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:45:52.134424    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:45:52.146582    9870 logs.go:276] 0 containers: []
	W0805 04:45:52.146595    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:45:52.146650    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:45:52.156976    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:45:52.156991    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:45:52.156997    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:45:52.168454    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:45:52.168466    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:45:52.182718    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:45:52.182728    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:45:52.195057    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:45:52.195071    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:45:52.207160    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:45:52.207174    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:45:52.226734    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:45:52.226746    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:45:52.250334    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:45:52.250344    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:45:52.282870    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:45:52.282879    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:45:52.286768    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:45:52.286773    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:45:52.324655    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:45:52.324667    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:45:52.338994    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:45:52.339003    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:45:52.353669    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:45:52.353678    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:45:52.365552    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:45:52.365561    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:45:54.878189    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:45:59.879479    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:45:59.879842    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:45:59.917343    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:45:59.917473    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:45:59.938621    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:45:59.938726    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:45:59.953958    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:45:59.954036    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:45:59.966759    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:45:59.966828    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:45:59.977649    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:45:59.977705    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:45:59.988070    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:45:59.988134    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:46:00.005462    9870 logs.go:276] 0 containers: []
	W0805 04:46:00.005473    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:46:00.005528    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:46:00.016478    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:46:00.016497    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:46:00.016502    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:46:00.031616    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:46:00.031627    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:46:00.042819    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:46:00.042830    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:46:00.054698    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:46:00.054710    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:46:00.066852    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:46:00.066863    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:46:00.078866    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:46:00.078878    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:46:00.090109    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:46:00.090119    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:46:00.125344    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:46:00.125352    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:46:00.147647    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:46:00.147659    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:46:00.162027    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:46:00.162036    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:46:00.173615    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:46:00.173625    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:46:00.197156    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:46:00.197162    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:46:00.201010    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:46:00.201018    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:46:00.237069    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:46:00.237083    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:46:00.250783    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:46:00.250796    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:46:02.770179    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:46:07.772440    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:46:07.772599    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:46:07.788023    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:46:07.788094    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:46:07.802124    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:46:07.802188    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:46:07.813626    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:46:07.813691    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:46:07.825386    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:46:07.825446    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:46:07.835233    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:46:07.835289    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:46:07.845744    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:46:07.845805    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:46:07.855345    9870 logs.go:276] 0 containers: []
	W0805 04:46:07.855356    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:46:07.855405    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:46:07.865898    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:46:07.865913    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:46:07.865918    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:46:07.877669    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:46:07.877680    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:46:07.896142    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:46:07.896154    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:46:07.920989    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:46:07.920996    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:46:07.955466    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:46:07.955476    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:46:07.970152    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:46:07.970163    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:46:07.982325    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:46:07.982342    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:46:07.996664    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:46:07.996673    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:46:08.008168    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:46:08.008177    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:46:08.012186    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:46:08.012192    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:46:08.023162    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:46:08.023177    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:46:08.034716    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:46:08.034727    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:46:08.073131    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:46:08.073140    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:46:08.087713    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:46:08.087722    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:46:08.098919    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:46:08.098929    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:46:10.618797    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:46:15.621681    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:46:15.622103    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:46:15.659235    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:46:15.659361    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:46:15.680467    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:46:15.680550    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:46:15.695010    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:46:15.695084    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:46:15.709942    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:46:15.710005    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:46:15.720319    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:46:15.720384    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:46:15.731317    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:46:15.731369    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:46:15.742153    9870 logs.go:276] 0 containers: []
	W0805 04:46:15.742164    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:46:15.742215    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:46:15.753296    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:46:15.753314    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:46:15.753319    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:46:15.765231    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:46:15.765245    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:46:15.788957    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:46:15.788963    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:46:15.809020    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:46:15.809032    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:46:15.831214    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:46:15.831226    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:46:15.842961    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:46:15.842973    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:46:15.854407    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:46:15.854417    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:46:15.866427    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:46:15.866437    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:46:15.899302    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:46:15.899311    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:46:15.935320    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:46:15.935329    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:46:15.950392    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:46:15.950404    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:46:15.969187    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:46:15.969200    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:46:15.980542    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:46:15.980553    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:46:15.991776    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:46:15.991787    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:46:15.996049    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:46:15.996055    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:46:18.509260    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:46:23.511655    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:46:23.512125    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:46:23.551784    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:46:23.551904    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:46:23.573693    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:46:23.573782    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:46:23.588898    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:46:23.588975    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:46:23.600831    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:46:23.600889    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:46:23.611391    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:46:23.611451    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:46:23.622394    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:46:23.622457    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:46:23.632198    9870 logs.go:276] 0 containers: []
	W0805 04:46:23.632208    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:46:23.632256    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:46:23.642987    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:46:23.643007    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:46:23.643012    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:46:23.655275    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:46:23.655289    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:46:23.669889    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:46:23.669902    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:46:23.685291    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:46:23.685304    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:46:23.697081    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:46:23.697090    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:46:23.716146    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:46:23.716158    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:46:23.754833    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:46:23.754848    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:46:23.780390    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:46:23.780397    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:46:23.791457    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:46:23.791468    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:46:23.826696    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:46:23.826709    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:46:23.839071    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:46:23.839082    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:46:23.851181    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:46:23.851194    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:46:23.863804    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:46:23.863817    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:46:23.868164    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:46:23.868172    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:46:23.879805    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:46:23.879819    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:46:26.400042    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:46:31.401195    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:46:31.401245    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:46:31.412368    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:46:31.412429    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:46:31.429875    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:46:31.429922    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:46:31.440950    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:46:31.440990    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:46:31.452038    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:46:31.452082    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:46:31.463529    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:46:31.463590    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:46:31.477602    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:46:31.477674    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:46:31.494869    9870 logs.go:276] 0 containers: []
	W0805 04:46:31.494882    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:46:31.494937    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:46:31.506976    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:46:31.506999    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:46:31.507005    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:46:31.543561    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:46:31.543580    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:46:31.559604    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:46:31.559619    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:46:31.575783    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:46:31.575801    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:46:31.595156    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:46:31.595181    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:46:31.614380    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:46:31.614391    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:46:31.625915    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:46:31.625929    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:46:31.637615    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:46:31.637626    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:46:31.652125    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:46:31.652136    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:46:31.666495    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:46:31.666506    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:46:31.679876    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:46:31.679888    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:46:31.692073    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:46:31.692085    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:46:31.696553    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:46:31.696559    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:46:31.731752    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:46:31.731764    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:46:31.743960    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:46:31.743970    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:46:34.269331    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:46:39.272063    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:46:39.272170    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:46:39.283887    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:46:39.283950    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:46:39.295057    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:46:39.295124    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:46:39.306601    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:46:39.306665    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:46:39.317436    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:46:39.317499    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:46:39.328465    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:46:39.328532    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:46:39.339055    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:46:39.339122    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:46:39.349878    9870 logs.go:276] 0 containers: []
	W0805 04:46:39.349887    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:46:39.349936    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:46:39.360552    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:46:39.360568    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:46:39.360575    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:46:39.365112    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:46:39.365121    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:46:39.380162    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:46:39.380175    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:46:39.395916    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:46:39.395925    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:46:39.419790    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:46:39.419799    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:46:39.431428    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:46:39.431438    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:46:39.445500    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:46:39.445511    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:46:39.457140    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:46:39.457150    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:46:39.468977    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:46:39.468988    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:46:39.486539    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:46:39.486548    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:46:39.506099    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:46:39.506110    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:46:39.538855    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:46:39.538864    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:46:39.550665    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:46:39.550678    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:46:39.593477    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:46:39.593488    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:46:39.606670    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:46:39.606681    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:46:42.122922    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:46:47.125311    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:46:47.125657    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:46:47.163522    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:46:47.163650    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:46:47.182120    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:46:47.182224    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:46:47.202159    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:46:47.202238    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:46:47.214248    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:46:47.214313    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:46:47.224331    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:46:47.224393    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:46:47.241304    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:46:47.241368    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:46:47.254376    9870 logs.go:276] 0 containers: []
	W0805 04:46:47.254386    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:46:47.254437    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:46:47.265169    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:46:47.265191    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:46:47.265196    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:46:47.269630    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:46:47.269639    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:46:47.307712    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:46:47.307725    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:46:47.319819    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:46:47.319832    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:46:47.334351    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:46:47.334364    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:46:47.351281    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:46:47.351293    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:46:47.365238    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:46:47.365250    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:46:47.376602    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:46:47.376616    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:46:47.388313    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:46:47.388323    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:46:47.400002    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:46:47.400011    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:46:47.425108    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:46:47.425114    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:46:47.459209    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:46:47.459215    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:46:47.470433    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:46:47.470442    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:46:47.484209    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:46:47.484221    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:46:47.495654    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:46:47.495664    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:46:50.009548    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:46:55.011947    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:46:55.012016    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:46:55.024513    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:46:55.024571    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:46:55.036847    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:46:55.036897    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:46:55.047534    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:46:55.047592    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:46:55.062217    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:46:55.062271    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:46:55.075296    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:46:55.075344    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:46:55.086615    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:46:55.086666    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:46:55.097206    9870 logs.go:276] 0 containers: []
	W0805 04:46:55.097219    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:46:55.097273    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:46:55.110394    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:46:55.110411    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:46:55.110418    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:46:55.123384    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:46:55.123395    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:46:55.135406    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:46:55.135419    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:46:55.153572    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:46:55.153585    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:46:55.176548    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:46:55.176559    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:46:55.193046    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:46:55.193058    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:46:55.206973    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:46:55.206984    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:46:55.218932    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:46:55.218946    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:46:55.235330    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:46:55.235345    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:46:55.263264    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:46:55.263285    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:46:55.276745    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:46:55.276757    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:46:55.311518    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:46:55.311538    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:46:55.316231    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:46:55.316242    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:46:55.353602    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:46:55.353612    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:46:55.365548    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:46:55.365559    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:46:57.879920    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:47:02.882647    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:47:02.882750    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:47:02.895369    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:47:02.895434    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:47:02.907541    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:47:02.907609    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:47:02.920078    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:47:02.920146    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:47:02.932426    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:47:02.932496    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:47:02.944650    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:47:02.944716    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:47:02.956677    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:47:02.956741    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:47:02.968786    9870 logs.go:276] 0 containers: []
	W0805 04:47:02.968798    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:47:02.968852    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:47:02.981511    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:47:02.981533    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:47:02.981540    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:47:02.994610    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:47:02.994623    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:47:03.009686    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:47:03.009696    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:47:03.020986    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:47:03.021001    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:47:03.032879    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:47:03.032890    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:47:03.066376    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:47:03.066386    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:47:03.071260    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:47:03.071270    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:47:03.085511    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:47:03.085521    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:47:03.100033    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:47:03.100047    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:47:03.112135    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:47:03.112145    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:47:03.129507    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:47:03.129516    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:47:03.154759    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:47:03.154766    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:47:03.189965    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:47:03.189975    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:47:03.201762    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:47:03.201772    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:47:03.213561    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:47:03.213573    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:47:05.727232    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:47:10.729508    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:47:10.729890    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:47:10.766016    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:47:10.766131    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:47:10.786530    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:47:10.786610    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:47:10.800860    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:47:10.800931    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:47:10.813338    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:47:10.813400    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:47:10.823922    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:47:10.823981    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:47:10.846234    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:47:10.846292    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:47:10.856984    9870 logs.go:276] 0 containers: []
	W0805 04:47:10.856996    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:47:10.857048    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:47:10.867433    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:47:10.867454    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:47:10.867461    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:47:10.881760    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:47:10.881772    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:47:10.893812    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:47:10.893824    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:47:10.905146    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:47:10.905159    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:47:10.919766    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:47:10.919775    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:47:10.939725    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:47:10.939734    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:47:10.973992    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:47:10.974004    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:47:10.985603    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:47:10.985616    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:47:10.989776    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:47:10.989785    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:47:11.004200    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:47:11.004211    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:47:11.015779    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:47:11.015791    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:47:11.027526    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:47:11.027538    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:47:11.050966    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:47:11.050975    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:47:11.083584    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:47:11.083595    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:47:11.095018    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:47:11.095030    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:47:13.609207    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:47:18.610527    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:47:18.610607    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:47:18.625867    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:47:18.625939    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:47:18.637452    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:47:18.637522    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:47:18.649234    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:47:18.649320    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:47:18.674068    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:47:18.674150    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:47:18.701919    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:47:18.701980    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:47:18.713193    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:47:18.713244    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:47:18.724139    9870 logs.go:276] 0 containers: []
	W0805 04:47:18.724153    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:47:18.724210    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:47:18.736907    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:47:18.736921    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:47:18.736926    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:47:18.748590    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:47:18.748603    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:47:18.764832    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:47:18.764842    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:47:18.780579    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:47:18.780588    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:47:18.804437    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:47:18.804451    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:47:18.819463    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:47:18.819475    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:47:18.838103    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:47:18.838116    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:47:18.873368    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:47:18.873378    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:47:18.877956    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:47:18.877964    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:47:18.914340    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:47:18.914354    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:47:18.931111    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:47:18.931125    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:47:18.944031    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:47:18.944045    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:47:18.960201    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:47:18.960212    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:47:18.972548    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:47:18.972556    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:47:18.983989    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:47:18.984002    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:47:21.499098    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:47:26.499941    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:47:26.500301    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:47:26.542831    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:47:26.542968    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:47:26.565233    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:47:26.565332    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:47:26.582916    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:47:26.583007    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:47:26.602346    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:47:26.602413    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:47:26.613060    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:47:26.613116    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:47:26.624043    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:47:26.624105    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:47:26.635231    9870 logs.go:276] 0 containers: []
	W0805 04:47:26.635243    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:47:26.635296    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:47:26.647311    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:47:26.647329    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:47:26.647333    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:47:26.682954    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:47:26.682967    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:47:26.695384    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:47:26.695394    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:47:26.706887    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:47:26.706900    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:47:26.718474    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:47:26.718483    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:47:26.734908    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:47:26.734920    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:47:26.751570    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:47:26.751582    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:47:26.768768    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:47:26.768780    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:47:26.780220    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:47:26.780232    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:47:26.791643    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:47:26.791656    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:47:26.814511    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:47:26.814518    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:47:26.846287    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:47:26.846294    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:47:26.850211    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:47:26.850219    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:47:26.868628    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:47:26.868642    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:47:26.880397    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:47:26.880410    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:47:29.393421    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:47:34.396342    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:47:34.396718    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 04:47:34.436287    9870 logs.go:276] 1 containers: [ca13ce401cce]
	I0805 04:47:34.436414    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 04:47:34.457976    9870 logs.go:276] 1 containers: [420f4f3bde31]
	I0805 04:47:34.458080    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 04:47:34.473534    9870 logs.go:276] 4 containers: [687dd8293b1e 5cd6e712c877 0a228b1b51ad 945cf216c4ce]
	I0805 04:47:34.473603    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 04:47:34.493545    9870 logs.go:276] 1 containers: [bb2646c661cb]
	I0805 04:47:34.493607    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 04:47:34.504577    9870 logs.go:276] 1 containers: [e406d81bfae0]
	I0805 04:47:34.504633    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 04:47:34.517379    9870 logs.go:276] 1 containers: [7b932b7b0f4a]
	I0805 04:47:34.517437    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 04:47:34.529178    9870 logs.go:276] 0 containers: []
	W0805 04:47:34.529190    9870 logs.go:278] No container was found matching "kindnet"
	I0805 04:47:34.529250    9870 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 04:47:34.539906    9870 logs.go:276] 1 containers: [ba18769da050]
	I0805 04:47:34.539922    9870 logs.go:123] Gathering logs for describe nodes ...
	I0805 04:47:34.539927    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 04:47:34.575375    9870 logs.go:123] Gathering logs for coredns [687dd8293b1e] ...
	I0805 04:47:34.575385    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 687dd8293b1e"
	I0805 04:47:34.587029    9870 logs.go:123] Gathering logs for coredns [5cd6e712c877] ...
	I0805 04:47:34.587042    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cd6e712c877"
	I0805 04:47:34.598739    9870 logs.go:123] Gathering logs for storage-provisioner [ba18769da050] ...
	I0805 04:47:34.598752    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba18769da050"
	I0805 04:47:34.610623    9870 logs.go:123] Gathering logs for kubelet ...
	I0805 04:47:34.610636    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 04:47:34.643603    9870 logs.go:123] Gathering logs for kube-scheduler [bb2646c661cb] ...
	I0805 04:47:34.643610    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2646c661cb"
	I0805 04:47:34.657905    9870 logs.go:123] Gathering logs for coredns [945cf216c4ce] ...
	I0805 04:47:34.657913    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945cf216c4ce"
	I0805 04:47:34.669838    9870 logs.go:123] Gathering logs for kube-proxy [e406d81bfae0] ...
	I0805 04:47:34.669849    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e406d81bfae0"
	I0805 04:47:34.682014    9870 logs.go:123] Gathering logs for container status ...
	I0805 04:47:34.682023    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 04:47:34.693344    9870 logs.go:123] Gathering logs for etcd [420f4f3bde31] ...
	I0805 04:47:34.693358    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 420f4f3bde31"
	I0805 04:47:34.707757    9870 logs.go:123] Gathering logs for kube-apiserver [ca13ce401cce] ...
	I0805 04:47:34.707768    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca13ce401cce"
	I0805 04:47:34.722610    9870 logs.go:123] Gathering logs for coredns [0a228b1b51ad] ...
	I0805 04:47:34.722622    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a228b1b51ad"
	I0805 04:47:34.734632    9870 logs.go:123] Gathering logs for kube-controller-manager [7b932b7b0f4a] ...
	I0805 04:47:34.734642    9870 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b932b7b0f4a"
	I0805 04:47:34.752129    9870 logs.go:123] Gathering logs for Docker ...
	I0805 04:47:34.752138    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 04:47:34.776793    9870 logs.go:123] Gathering logs for dmesg ...
	I0805 04:47:34.776801    9870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 04:47:37.283359    9870 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 04:47:42.286284    9870 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 04:47:42.293153    9870 out.go:177] 
	W0805 04:47:42.298505    9870 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0805 04:47:42.298534    9870 out.go:239] * 
	* 
	W0805 04:47:42.300515    9870 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:47:42.309920    9870 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-528000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (585.85s)
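Note on the failure mechanism shown in the stderr above: after the VM comes up, minikube repeatedly gathers component logs and polls the apiserver healthz endpoint until the 6m0s node-wait deadline expires. A minimal way to reproduce that probe by hand, sketched under the assumption that the guest is reachable at the 10.0.2.15:8443 address from the log:

	# hypothetical manual check of the same endpoint minikube polls
	curl -k https://10.0.2.15:8443/healthz
	# or from inside the guest, using the kubectl binary minikube staged there
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl get --raw /healthz --kubeconfig=/var/lib/minikube/kubeconfig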

TestPause/serial/Start (9.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-908000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-908000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.803924959s)

-- stdout --
	* [pause-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-908000" primary control-plane node in "pause-908000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-908000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-908000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-908000 -n pause-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-908000 -n pause-908000: exit status 7 (44.387958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.85s)
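From here on the starts fail earlier and all in the same way: socket_vmnet_client cannot reach the /var/run/socket_vmnet unix socket, so QEMU never receives its network file descriptor and the VM is torn down. A quick host-side check, sketched assuming the install paths shown in the log (socket_vmnet_client connects to the socket and then runs the given command, so `true` serves as a probe):

	ls -l /var/run/socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the second command prints the same "Connection refused" error, the socket_vmnet daemon is not running on the host.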

TestNoKubernetes/serial/StartWithK8s (9.77s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-839000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-839000 --driver=qemu2 : exit status 80 (9.697085541s)

-- stdout --
	* [NoKubernetes-839000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-839000" primary control-plane node in "NoKubernetes-839000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-839000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-839000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-839000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000: exit status 7 (67.36375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.77s)
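The stderr already names the usual recovery for a half-created profile: delete it before retrying. With the binary under test that would be, for example:

	out/minikube-darwin-arm64 delete -p NoKubernetes-839000

Note that the serial subtests below reuse this profile, which is why they switch from "Creating qemu2 VM" to "Restarting existing qemu2 VM" and fail on "driver start" rather than "creating host".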

TestNoKubernetes/serial/StartWithStopK8s (5.26s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --driver=qemu2 : exit status 80 (5.23252s)

-- stdout --
	* [NoKubernetes-839000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-839000
	* Restarting existing qemu2 VM for "NoKubernetes-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-839000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000: exit status 7 (31.762584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.26s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --driver=qemu2 : exit status 80 (5.251078709s)

-- stdout --
	* [NoKubernetes-839000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-839000
	* Restarting existing qemu2 VM for "NoKubernetes-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-839000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000: exit status 7 (65.526875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-839000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-839000 --driver=qemu2 : exit status 80 (5.252026209s)

-- stdout --
	* [NoKubernetes-839000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-839000
	* Restarting existing qemu2 VM for "NoKubernetes-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-839000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-839000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000: exit status 7 (52.012125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)

TestNetworkPlugins/group/auto/Start (10.13s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.123506291s)

-- stdout --
	* [auto-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-816000" primary control-plane node in "auto-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:45:42.989669   10066 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:45:42.989804   10066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:45:42.989807   10066 out.go:304] Setting ErrFile to fd 2...
	I0805 04:45:42.989809   10066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:45:42.989959   10066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:45:42.991033   10066 out.go:298] Setting JSON to false
	I0805 04:45:43.007567   10066 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6312,"bootTime":1722852030,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:45:43.007679   10066 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:45:43.012502   10066 out.go:177] * [auto-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:45:43.018650   10066 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:45:43.018708   10066 notify.go:220] Checking for updates...
	I0805 04:45:43.026599   10066 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:45:43.029595   10066 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:45:43.032696   10066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:45:43.035688   10066 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:45:43.038639   10066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:45:43.041937   10066 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:45:43.041998   10066 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:45:43.042043   10066 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:45:43.045518   10066 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:45:43.052661   10066 start.go:297] selected driver: qemu2
	I0805 04:45:43.052668   10066 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:45:43.052675   10066 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:45:43.055132   10066 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:45:43.057604   10066 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:45:43.060752   10066 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:45:43.060815   10066 cni.go:84] Creating CNI manager for ""
	I0805 04:45:43.060825   10066 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:45:43.060830   10066 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:45:43.060867   10066 start.go:340] cluster config:
	{Name:auto-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:45:43.064770   10066 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:45:43.070630   10066 out.go:177] * Starting "auto-816000" primary control-plane node in "auto-816000" cluster
	I0805 04:45:43.074657   10066 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:45:43.074688   10066 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:45:43.074702   10066 cache.go:56] Caching tarball of preloaded images
	I0805 04:45:43.074784   10066 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:45:43.074790   10066 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:45:43.074861   10066 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/auto-816000/config.json ...
	I0805 04:45:43.074872   10066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/auto-816000/config.json: {Name:mkd981c5e38606e4d20ba4b761b01631cc094657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:45:43.075117   10066 start.go:360] acquireMachinesLock for auto-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:45:43.075147   10066 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "auto-816000"
	I0805 04:45:43.075157   10066 start.go:93] Provisioning new machine with config: &{Name:auto-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:45:43.075207   10066 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:45:43.083598   10066 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:45:43.098733   10066 start.go:159] libmachine.API.Create for "auto-816000" (driver="qemu2")
	I0805 04:45:43.098769   10066 client.go:168] LocalClient.Create starting
	I0805 04:45:43.098837   10066 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:45:43.098868   10066 main.go:141] libmachine: Decoding PEM data...
	I0805 04:45:43.098876   10066 main.go:141] libmachine: Parsing certificate...
	I0805 04:45:43.098913   10066 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:45:43.098935   10066 main.go:141] libmachine: Decoding PEM data...
	I0805 04:45:43.098944   10066 main.go:141] libmachine: Parsing certificate...
	I0805 04:45:43.099298   10066 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:45:43.247218   10066 main.go:141] libmachine: Creating SSH key...
	I0805 04:45:43.616262   10066 main.go:141] libmachine: Creating Disk image...
	I0805 04:45:43.616275   10066 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:45:43.616499   10066 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/disk.qcow2
	I0805 04:45:43.626300   10066 main.go:141] libmachine: STDOUT: 
	I0805 04:45:43.626322   10066 main.go:141] libmachine: STDERR: 
	I0805 04:45:43.626373   10066 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/disk.qcow2 +20000M
	I0805 04:45:43.634798   10066 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:45:43.634822   10066 main.go:141] libmachine: STDERR: 
	I0805 04:45:43.634839   10066 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/disk.qcow2
	I0805 04:45:43.634843   10066 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:45:43.634854   10066 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:45:43.634888   10066 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:63:8e:eb:41:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/disk.qcow2
	I0805 04:45:43.636665   10066 main.go:141] libmachine: STDOUT: 
	I0805 04:45:43.636679   10066 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:45:43.636698   10066 client.go:171] duration metric: took 537.918791ms to LocalClient.Create
	I0805 04:45:45.638920   10066 start.go:128] duration metric: took 2.563660583s to createHost
	I0805 04:45:45.639129   10066 start.go:83] releasing machines lock for "auto-816000", held for 2.563948166s
	W0805 04:45:45.639205   10066 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:45:45.647635   10066 out.go:177] * Deleting "auto-816000" in qemu2 ...
	W0805 04:45:45.671705   10066 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:45:45.671732   10066 start.go:729] Will try again in 5 seconds ...
	I0805 04:45:50.672817   10066 start.go:360] acquireMachinesLock for auto-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:45:50.673174   10066 start.go:364] duration metric: took 305.5µs to acquireMachinesLock for "auto-816000"
	I0805 04:45:50.673213   10066 start.go:93] Provisioning new machine with config: &{Name:auto-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:45:50.673365   10066 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:45:50.680683   10066 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:45:50.720410   10066 start.go:159] libmachine.API.Create for "auto-816000" (driver="qemu2")
	I0805 04:45:50.720454   10066 client.go:168] LocalClient.Create starting
	I0805 04:45:50.720566   10066 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:45:50.720635   10066 main.go:141] libmachine: Decoding PEM data...
	I0805 04:45:50.720649   10066 main.go:141] libmachine: Parsing certificate...
	I0805 04:45:50.720713   10066 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:45:50.720752   10066 main.go:141] libmachine: Decoding PEM data...
	I0805 04:45:50.720771   10066 main.go:141] libmachine: Parsing certificate...
	I0805 04:45:50.721270   10066 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:45:50.875332   10066 main.go:141] libmachine: Creating SSH key...
	I0805 04:45:51.029753   10066 main.go:141] libmachine: Creating Disk image...
	I0805 04:45:51.029760   10066 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:45:51.029980   10066 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/disk.qcow2
	I0805 04:45:51.039986   10066 main.go:141] libmachine: STDOUT: 
	I0805 04:45:51.040020   10066 main.go:141] libmachine: STDERR: 
	I0805 04:45:51.040073   10066 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/disk.qcow2 +20000M
	I0805 04:45:51.048203   10066 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:45:51.048226   10066 main.go:141] libmachine: STDERR: 
	I0805 04:45:51.048241   10066 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/disk.qcow2
	I0805 04:45:51.048245   10066 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:45:51.048274   10066 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:45:51.048306   10066 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:2f:01:5e:e7:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/auto-816000/disk.qcow2
	I0805 04:45:51.050067   10066 main.go:141] libmachine: STDOUT: 
	I0805 04:45:51.050082   10066 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:45:51.050096   10066 client.go:171] duration metric: took 329.635125ms to LocalClient.Create
	I0805 04:45:53.052426   10066 start.go:128] duration metric: took 2.379004084s to createHost
	I0805 04:45:53.052496   10066 start.go:83] releasing machines lock for "auto-816000", held for 2.379279083s
	W0805 04:45:53.052846   10066 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:45:53.064276   10066 out.go:177] 
	W0805 04:45:53.068341   10066 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:45:53.068445   10066 out.go:239] * 
	* 
	W0805 04:45:53.070961   10066 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:45:53.075345   10066 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.13s)
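The full QEMU invocation in the trace makes the dependency explicit: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must hand the VM a file descriptor taken from the /var/run/socket_vmnet socket (the -netdev socket,id=net0,fd=3 argument). One plausible host-side fix, sketched on the assumption that socket_vmnet is installed under the /opt/socket_vmnet prefix seen in the log (the gateway address here is illustrative):

	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon listening, the SocketVMnetClientPath/SocketVMnetPath settings in the cluster config above need no change.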

TestNetworkPlugins/group/kindnet/Start (9.68s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.678095s)

-- stdout --
	* [kindnet-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-816000" primary control-plane node in "kindnet-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:45:55.302064   10177 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:45:55.302189   10177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:45:55.302192   10177 out.go:304] Setting ErrFile to fd 2...
	I0805 04:45:55.302198   10177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:45:55.302360   10177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:45:55.303422   10177 out.go:298] Setting JSON to false
	I0805 04:45:55.320028   10177 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6325,"bootTime":1722852030,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:45:55.320124   10177 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:45:55.325662   10177 out.go:177] * [kindnet-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:45:55.331531   10177 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:45:55.331575   10177 notify.go:220] Checking for updates...
	I0805 04:45:55.338366   10177 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:45:55.341455   10177 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:45:55.344461   10177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:45:55.345970   10177 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:45:55.349498   10177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:45:55.352887   10177 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:45:55.352955   10177 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:45:55.353001   10177 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:45:55.357344   10177 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:45:55.364473   10177 start.go:297] selected driver: qemu2
	I0805 04:45:55.364483   10177 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:45:55.364499   10177 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:45:55.366837   10177 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:45:55.369436   10177 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:45:55.372518   10177 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:45:55.372553   10177 cni.go:84] Creating CNI manager for "kindnet"
	I0805 04:45:55.372557   10177 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 04:45:55.372588   10177 start.go:340] cluster config:
	{Name:kindnet-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:45:55.376423   10177 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:45:55.382405   10177 out.go:177] * Starting "kindnet-816000" primary control-plane node in "kindnet-816000" cluster
	I0805 04:45:55.386452   10177 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:45:55.386475   10177 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:45:55.386488   10177 cache.go:56] Caching tarball of preloaded images
	I0805 04:45:55.386547   10177 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:45:55.386553   10177 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:45:55.386610   10177 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/kindnet-816000/config.json ...
	I0805 04:45:55.386620   10177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/kindnet-816000/config.json: {Name:mk8750717962d01905505c29b4faa9a74229e577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:45:55.387114   10177 start.go:360] acquireMachinesLock for kindnet-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:45:55.387150   10177 start.go:364] duration metric: took 28.791µs to acquireMachinesLock for "kindnet-816000"
	I0805 04:45:55.387160   10177 start.go:93] Provisioning new machine with config: &{Name:kindnet-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:45:55.387198   10177 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:45:55.394473   10177 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:45:55.410868   10177 start.go:159] libmachine.API.Create for "kindnet-816000" (driver="qemu2")
	I0805 04:45:55.410898   10177 client.go:168] LocalClient.Create starting
	I0805 04:45:55.410959   10177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:45:55.410990   10177 main.go:141] libmachine: Decoding PEM data...
	I0805 04:45:55.411000   10177 main.go:141] libmachine: Parsing certificate...
	I0805 04:45:55.411036   10177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:45:55.411058   10177 main.go:141] libmachine: Decoding PEM data...
	I0805 04:45:55.411067   10177 main.go:141] libmachine: Parsing certificate...
	I0805 04:45:55.411438   10177 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:45:55.565709   10177 main.go:141] libmachine: Creating SSH key...
	I0805 04:45:55.599957   10177 main.go:141] libmachine: Creating Disk image...
	I0805 04:45:55.599963   10177 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:45:55.600131   10177 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/disk.qcow2
	I0805 04:45:55.609587   10177 main.go:141] libmachine: STDOUT: 
	I0805 04:45:55.609607   10177 main.go:141] libmachine: STDERR: 
	I0805 04:45:55.609658   10177 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/disk.qcow2 +20000M
	I0805 04:45:55.617982   10177 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:45:55.617999   10177 main.go:141] libmachine: STDERR: 
	I0805 04:45:55.618016   10177 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/disk.qcow2
	I0805 04:45:55.618021   10177 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:45:55.618033   10177 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:45:55.618060   10177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:e1:66:b8:97:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/disk.qcow2
	I0805 04:45:55.619767   10177 main.go:141] libmachine: STDOUT: 
	I0805 04:45:55.619781   10177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:45:55.619800   10177 client.go:171] duration metric: took 208.896375ms to LocalClient.Create
	I0805 04:45:57.622021   10177 start.go:128] duration metric: took 2.23477075s to createHost
	I0805 04:45:57.622112   10177 start.go:83] releasing machines lock for "kindnet-816000", held for 2.234933208s
	W0805 04:45:57.622210   10177 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:45:57.631756   10177 out.go:177] * Deleting "kindnet-816000" in qemu2 ...
	W0805 04:45:57.658957   10177 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:45:57.658980   10177 start.go:729] Will try again in 5 seconds ...
	I0805 04:46:02.661168   10177 start.go:360] acquireMachinesLock for kindnet-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:46:02.661533   10177 start.go:364] duration metric: took 273.5µs to acquireMachinesLock for "kindnet-816000"
	I0805 04:46:02.661613   10177 start.go:93] Provisioning new machine with config: &{Name:kindnet-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:46:02.661752   10177 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:46:02.670062   10177 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:46:02.700544   10177 start.go:159] libmachine.API.Create for "kindnet-816000" (driver="qemu2")
	I0805 04:46:02.700599   10177 client.go:168] LocalClient.Create starting
	I0805 04:46:02.700695   10177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:46:02.700749   10177 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:02.700763   10177 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:02.700816   10177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:46:02.700849   10177 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:02.700859   10177 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:02.701588   10177 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:46:02.853665   10177 main.go:141] libmachine: Creating SSH key...
	I0805 04:46:02.894999   10177 main.go:141] libmachine: Creating Disk image...
	I0805 04:46:02.895005   10177 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:46:02.895188   10177 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/disk.qcow2
	I0805 04:46:02.904404   10177 main.go:141] libmachine: STDOUT: 
	I0805 04:46:02.904423   10177 main.go:141] libmachine: STDERR: 
	I0805 04:46:02.904489   10177 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/disk.qcow2 +20000M
	I0805 04:46:02.912329   10177 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:46:02.912345   10177 main.go:141] libmachine: STDERR: 
	I0805 04:46:02.912363   10177 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/disk.qcow2
	I0805 04:46:02.912368   10177 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:46:02.912383   10177 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:46:02.912408   10177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:38:fc:7a:d5:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kindnet-816000/disk.qcow2
	I0805 04:46:02.914063   10177 main.go:141] libmachine: STDOUT: 
	I0805 04:46:02.914079   10177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:46:02.914093   10177 client.go:171] duration metric: took 213.485ms to LocalClient.Create
	I0805 04:46:04.916286   10177 start.go:128] duration metric: took 2.254484s to createHost
	I0805 04:46:04.916388   10177 start.go:83] releasing machines lock for "kindnet-816000", held for 2.254805458s
	W0805 04:46:04.916738   10177 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:04.926859   10177 out.go:177] 
	W0805 04:46:04.930800   10177 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:46:04.930821   10177 out.go:239] * 
	* 
	W0805 04:46:04.932700   10177 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:46:04.941755   10177 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.68s)
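
Every start in this group dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and minikube exits with status 80 after one retry. A host-side check is a reasonable first move; the sketch below reuses the binary and socket paths shown in the log, while the daemon launch line and gateway address follow the upstream socket_vmnet README and should be treated as assumptions for this agent:

	# Is the socket_vmnet daemon alive, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If missing, start the daemon (root is required by the vmnet framework;
	# gateway address is the upstream default, an assumption here):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
	# Confirm a client can connect before re-running the test; socket_vmnet_client
	# execs the given command (here: true) once the socket handshake succeeds:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true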

TestNetworkPlugins/group/flannel/Start (9.78s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.773886583s)

-- stdout --
	* [flannel-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-816000" primary control-plane node in "flannel-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:46:07.242689   10290 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:46:07.242818   10290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:46:07.242821   10290 out.go:304] Setting ErrFile to fd 2...
	I0805 04:46:07.242823   10290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:46:07.242966   10290 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:46:07.244011   10290 out.go:298] Setting JSON to false
	I0805 04:46:07.260253   10290 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6337,"bootTime":1722852030,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:46:07.260350   10290 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:46:07.266105   10290 out.go:177] * [flannel-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:46:07.272935   10290 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:46:07.273030   10290 notify.go:220] Checking for updates...
	I0805 04:46:07.280012   10290 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:46:07.282998   10290 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:46:07.285972   10290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:46:07.289004   10290 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:46:07.291989   10290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:46:07.295360   10290 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:46:07.295427   10290 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:46:07.295475   10290 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:46:07.299944   10290 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:46:07.306967   10290 start.go:297] selected driver: qemu2
	I0805 04:46:07.306973   10290 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:46:07.306981   10290 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:46:07.309196   10290 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:46:07.312061   10290 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:46:07.315029   10290 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:46:07.315052   10290 cni.go:84] Creating CNI manager for "flannel"
	I0805 04:46:07.315055   10290 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0805 04:46:07.315098   10290 start.go:340] cluster config:
	{Name:flannel-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:46:07.318705   10290 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:46:07.326026   10290 out.go:177] * Starting "flannel-816000" primary control-plane node in "flannel-816000" cluster
	I0805 04:46:07.329983   10290 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:46:07.329999   10290 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:46:07.330012   10290 cache.go:56] Caching tarball of preloaded images
	I0805 04:46:07.330082   10290 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:46:07.330087   10290 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:46:07.330154   10290 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/flannel-816000/config.json ...
	I0805 04:46:07.330165   10290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/flannel-816000/config.json: {Name:mk441601260756b447932227cde903b9ed6c55a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:46:07.330539   10290 start.go:360] acquireMachinesLock for flannel-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:46:07.330574   10290 start.go:364] duration metric: took 28.166µs to acquireMachinesLock for "flannel-816000"
	I0805 04:46:07.330589   10290 start.go:93] Provisioning new machine with config: &{Name:flannel-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:46:07.330614   10290 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:46:07.338907   10290 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:46:07.355433   10290 start.go:159] libmachine.API.Create for "flannel-816000" (driver="qemu2")
	I0805 04:46:07.355462   10290 client.go:168] LocalClient.Create starting
	I0805 04:46:07.355531   10290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:46:07.355564   10290 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:07.355575   10290 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:07.355620   10290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:46:07.355642   10290 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:07.355652   10290 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:07.356100   10290 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:46:07.504886   10290 main.go:141] libmachine: Creating SSH key...
	I0805 04:46:07.566040   10290 main.go:141] libmachine: Creating Disk image...
	I0805 04:46:07.566048   10290 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:46:07.566433   10290 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/disk.qcow2
	I0805 04:46:07.575915   10290 main.go:141] libmachine: STDOUT: 
	I0805 04:46:07.575930   10290 main.go:141] libmachine: STDERR: 
	I0805 04:46:07.575970   10290 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/disk.qcow2 +20000M
	I0805 04:46:07.583948   10290 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:46:07.583971   10290 main.go:141] libmachine: STDERR: 
	I0805 04:46:07.583985   10290 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/disk.qcow2
	I0805 04:46:07.583989   10290 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:46:07.583999   10290 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:46:07.584025   10290 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:17:95:b4:05:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/disk.qcow2
	I0805 04:46:07.585614   10290 main.go:141] libmachine: STDOUT: 
	I0805 04:46:07.585628   10290 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:46:07.585648   10290 client.go:171] duration metric: took 230.179041ms to LocalClient.Create
	I0805 04:46:09.587759   10290 start.go:128] duration metric: took 2.257111833s to createHost
	I0805 04:46:09.587803   10290 start.go:83] releasing machines lock for "flannel-816000", held for 2.257201458s
	W0805 04:46:09.587850   10290 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:09.597603   10290 out.go:177] * Deleting "flannel-816000" in qemu2 ...
	W0805 04:46:09.612796   10290 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:09.612809   10290 start.go:729] Will try again in 5 seconds ...
	I0805 04:46:14.615039   10290 start.go:360] acquireMachinesLock for flannel-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:46:14.615517   10290 start.go:364] duration metric: took 365.25µs to acquireMachinesLock for "flannel-816000"
	I0805 04:46:14.615647   10290 start.go:93] Provisioning new machine with config: &{Name:flannel-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:46:14.616019   10290 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:46:14.623683   10290 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:46:14.670911   10290 start.go:159] libmachine.API.Create for "flannel-816000" (driver="qemu2")
	I0805 04:46:14.670969   10290 client.go:168] LocalClient.Create starting
	I0805 04:46:14.671094   10290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:46:14.671170   10290 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:14.671184   10290 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:14.671246   10290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:46:14.671294   10290 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:14.671306   10290 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:14.672001   10290 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:46:14.830511   10290 main.go:141] libmachine: Creating SSH key...
	I0805 04:46:14.929888   10290 main.go:141] libmachine: Creating Disk image...
	I0805 04:46:14.929894   10290 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:46:14.930075   10290 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/disk.qcow2
	I0805 04:46:14.939482   10290 main.go:141] libmachine: STDOUT: 
	I0805 04:46:14.939501   10290 main.go:141] libmachine: STDERR: 
	I0805 04:46:14.939544   10290 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/disk.qcow2 +20000M
	I0805 04:46:14.947648   10290 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:46:14.947666   10290 main.go:141] libmachine: STDERR: 
	I0805 04:46:14.947675   10290 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/disk.qcow2
	I0805 04:46:14.947679   10290 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:46:14.947701   10290 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:46:14.947722   10290 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:ba:4c:5c:46:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000/disk.qcow2
	I0805 04:46:14.949577   10290 main.go:141] libmachine: STDOUT: 
	I0805 04:46:14.949595   10290 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:46:14.949619   10290 client.go:171] duration metric: took 278.640208ms to LocalClient.Create
	I0805 04:46:16.951970   10290 start.go:128] duration metric: took 2.33584725s to createHost
	I0805 04:46:16.952076   10290 start.go:83] releasing machines lock for "flannel-816000", held for 2.33651225s
	W0805 04:46:16.952428   10290 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:16.963045   10290 out.go:177] 
	W0805 04:46:16.969146   10290 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:46:16.969173   10290 out.go:239] * 
	* 
	W0805 04:46:16.970809   10290 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:46:16.979073   10290 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.78s)
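
In both attempts above the disk image preparation succeeds (qemu-img convert and qemu-img resize return empty STDERR); only the socket_vmnet hand-off fails. To rule out the QEMU tooling itself, the image steps can be replayed in isolation. A sketch using the machine directory path from the log; the qemu-img info step is an added verification, not something the log runs:

	MACHINE=/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/flannel-816000
	# Rebuild the qcow2 from the raw image, then grow it, exactly as libmachine does:
	qemu-img convert -f raw -O qcow2 "$MACHINE/disk.qcow2.raw" "$MACHINE/disk.qcow2"
	qemu-img resize "$MACHINE/disk.qcow2" +20000M
	# Confirm the enlarged virtual size and image format:
	qemu-img info "$MACHINE/disk.qcow2"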

TestNetworkPlugins/group/enable-default-cni/Start (9.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.852385542s)

-- stdout --
	* [enable-default-cni-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-816000" primary control-plane node in "enable-default-cni-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:46:19.355951   10408 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:46:19.356089   10408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:46:19.356092   10408 out.go:304] Setting ErrFile to fd 2...
	I0805 04:46:19.356094   10408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:46:19.356217   10408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:46:19.357359   10408 out.go:298] Setting JSON to false
	I0805 04:46:19.373718   10408 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6349,"bootTime":1722852030,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:46:19.373782   10408 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:46:19.378523   10408 out.go:177] * [enable-default-cni-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:46:19.380024   10408 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:46:19.380134   10408 notify.go:220] Checking for updates...
	I0805 04:46:19.389481   10408 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:46:19.392408   10408 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:46:19.395496   10408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:46:19.398455   10408 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:46:19.405316   10408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:46:19.408835   10408 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:46:19.408897   10408 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:46:19.408942   10408 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:46:19.413419   10408 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:46:19.420456   10408 start.go:297] selected driver: qemu2
	I0805 04:46:19.420462   10408 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:46:19.420474   10408 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:46:19.422773   10408 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:46:19.425407   10408 out.go:177] * Automatically selected the socket_vmnet network
	E0805 04:46:19.426813   10408 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0805 04:46:19.426823   10408 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:46:19.426847   10408 cni.go:84] Creating CNI manager for "bridge"
	I0805 04:46:19.426851   10408 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:46:19.426869   10408 start.go:340] cluster config:
	{Name:enable-default-cni-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:46:19.430506   10408 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:46:19.438502   10408 out.go:177] * Starting "enable-default-cni-816000" primary control-plane node in "enable-default-cni-816000" cluster
	I0805 04:46:19.442411   10408 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:46:19.442425   10408 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:46:19.442439   10408 cache.go:56] Caching tarball of preloaded images
	I0805 04:46:19.442494   10408 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:46:19.442499   10408 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:46:19.442574   10408 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/enable-default-cni-816000/config.json ...
	I0805 04:46:19.442584   10408 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/enable-default-cni-816000/config.json: {Name:mk3f1e8079fdf54cc30ee652cf01a90dbefae7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:46:19.442963   10408 start.go:360] acquireMachinesLock for enable-default-cni-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:46:19.442994   10408 start.go:364] duration metric: took 24µs to acquireMachinesLock for "enable-default-cni-816000"
	I0805 04:46:19.443004   10408 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:46:19.443029   10408 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:46:19.447443   10408 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:46:19.462201   10408 start.go:159] libmachine.API.Create for "enable-default-cni-816000" (driver="qemu2")
	I0805 04:46:19.462224   10408 client.go:168] LocalClient.Create starting
	I0805 04:46:19.462282   10408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:46:19.462313   10408 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:19.462321   10408 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:19.462355   10408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:46:19.462377   10408 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:19.462383   10408 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:19.462858   10408 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:46:19.607870   10408 main.go:141] libmachine: Creating SSH key...
	I0805 04:46:19.795892   10408 main.go:141] libmachine: Creating Disk image...
	I0805 04:46:19.795905   10408 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:46:19.796108   10408 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/disk.qcow2
	I0805 04:46:19.805918   10408 main.go:141] libmachine: STDOUT: 
	I0805 04:46:19.805944   10408 main.go:141] libmachine: STDERR: 
	I0805 04:46:19.806001   10408 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/disk.qcow2 +20000M
	I0805 04:46:19.814064   10408 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:46:19.814078   10408 main.go:141] libmachine: STDERR: 
	I0805 04:46:19.814103   10408 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/disk.qcow2
	I0805 04:46:19.814107   10408 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:46:19.814122   10408 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:46:19.814157   10408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:ae:b8:66:6e:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/disk.qcow2
	I0805 04:46:19.815884   10408 main.go:141] libmachine: STDOUT: 
	I0805 04:46:19.815899   10408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:46:19.815917   10408 client.go:171] duration metric: took 353.68525ms to LocalClient.Create
	I0805 04:46:21.818255   10408 start.go:128] duration metric: took 2.375170042s to createHost
	I0805 04:46:21.818340   10408 start.go:83] releasing machines lock for "enable-default-cni-816000", held for 2.375311667s
	W0805 04:46:21.818449   10408 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:21.829723   10408 out.go:177] * Deleting "enable-default-cni-816000" in qemu2 ...
	W0805 04:46:21.857885   10408 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:21.857916   10408 start.go:729] Will try again in 5 seconds ...
	I0805 04:46:26.860210   10408 start.go:360] acquireMachinesLock for enable-default-cni-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:46:26.860734   10408 start.go:364] duration metric: took 399.292µs to acquireMachinesLock for "enable-default-cni-816000"
	I0805 04:46:26.860807   10408 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:46:26.861116   10408 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:46:26.870813   10408 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:46:26.924396   10408 start.go:159] libmachine.API.Create for "enable-default-cni-816000" (driver="qemu2")
	I0805 04:46:26.924452   10408 client.go:168] LocalClient.Create starting
	I0805 04:46:26.924571   10408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:46:26.924649   10408 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:26.924666   10408 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:26.924736   10408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:46:26.924784   10408 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:26.924802   10408 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:26.925376   10408 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:46:27.084177   10408 main.go:141] libmachine: Creating SSH key...
	I0805 04:46:27.120452   10408 main.go:141] libmachine: Creating Disk image...
	I0805 04:46:27.120458   10408 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:46:27.120648   10408 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/disk.qcow2
	I0805 04:46:27.129851   10408 main.go:141] libmachine: STDOUT: 
	I0805 04:46:27.129869   10408 main.go:141] libmachine: STDERR: 
	I0805 04:46:27.129911   10408 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/disk.qcow2 +20000M
	I0805 04:46:27.138114   10408 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:46:27.138130   10408 main.go:141] libmachine: STDERR: 
	I0805 04:46:27.138138   10408 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/disk.qcow2
	I0805 04:46:27.138149   10408 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:46:27.138159   10408 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:46:27.138200   10408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:fd:e2:40:6a:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/enable-default-cni-816000/disk.qcow2
	I0805 04:46:27.139992   10408 main.go:141] libmachine: STDOUT: 
	I0805 04:46:27.140007   10408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:46:27.140017   10408 client.go:171] duration metric: took 215.555375ms to LocalClient.Create
	I0805 04:46:29.142124   10408 start.go:128] duration metric: took 2.280970333s to createHost
	I0805 04:46:29.142147   10408 start.go:83] releasing machines lock for "enable-default-cni-816000", held for 2.281368084s
	W0805 04:46:29.142266   10408 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:29.151454   10408 out.go:177] 
	W0805 04:46:29.158420   10408 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:46:29.158441   10408 out.go:239] * 
	* 
	W0805 04:46:29.159637   10408 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:46:29.175451   10408 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.85s)
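
All of the TestNetworkPlugins start failures in this report share the single root cause visible in STDERR above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and minikube gives up after one retry with exit status 80. A quick way to test for that condition outside minikube is to dial the socket directly. The following Go sketch is a hypothetical stand-alone probe, not part of minikube or its test suite; the messages and exit code are illustrative, and the only thing taken from the logs is the SocketVMnetPath.

package main

// Hypothetical probe (not from the minikube sources): succeeds only if a
// daemon is currently accepting connections on the socket_vmnet unix socket.
import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken verbatim from the SocketVMnetPath field in the config dumps above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this host the error would be ECONNREFUSED, matching every run above.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a healthy agent this prints the success line; here it would exit non-zero with ECONNREFUSED, the same condition that drives each "Will try again in 5 seconds" retry and, ultimately, the exit status 80 that net_test.go reports.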

TestNetworkPlugins/group/bridge/Start (10.11s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.104110625s)

-- stdout --
	* [bridge-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-816000" primary control-plane node in "bridge-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:46:31.351512   10517 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:46:31.351629   10517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:46:31.351633   10517 out.go:304] Setting ErrFile to fd 2...
	I0805 04:46:31.351635   10517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:46:31.351767   10517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:46:31.352904   10517 out.go:298] Setting JSON to false
	I0805 04:46:31.369985   10517 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6361,"bootTime":1722852030,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:46:31.370051   10517 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:46:31.375111   10517 out.go:177] * [bridge-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:46:31.382139   10517 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:46:31.382220   10517 notify.go:220] Checking for updates...
	I0805 04:46:31.389054   10517 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:46:31.392072   10517 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:46:31.395108   10517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:46:31.398119   10517 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:46:31.401103   10517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:46:31.404475   10517 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:46:31.404552   10517 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:46:31.404603   10517 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:46:31.409079   10517 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:46:31.415108   10517 start.go:297] selected driver: qemu2
	I0805 04:46:31.415117   10517 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:46:31.415123   10517 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:46:31.417733   10517 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:46:31.421085   10517 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:46:31.424190   10517 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:46:31.424226   10517 cni.go:84] Creating CNI manager for "bridge"
	I0805 04:46:31.424231   10517 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:46:31.424265   10517 start.go:340] cluster config:
	{Name:bridge-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:46:31.428907   10517 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:46:31.436083   10517 out.go:177] * Starting "bridge-816000" primary control-plane node in "bridge-816000" cluster
	I0805 04:46:31.440114   10517 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:46:31.440146   10517 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:46:31.440159   10517 cache.go:56] Caching tarball of preloaded images
	I0805 04:46:31.440257   10517 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:46:31.440264   10517 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:46:31.440327   10517 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/bridge-816000/config.json ...
	I0805 04:46:31.440340   10517 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/bridge-816000/config.json: {Name:mk1cfbf2b1d75d5b8af249e8b588294edd048865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:46:31.440839   10517 start.go:360] acquireMachinesLock for bridge-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:46:31.440881   10517 start.go:364] duration metric: took 35.167µs to acquireMachinesLock for "bridge-816000"
	I0805 04:46:31.440893   10517 start.go:93] Provisioning new machine with config: &{Name:bridge-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:46:31.440924   10517 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:46:31.445114   10517 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:46:31.461560   10517 start.go:159] libmachine.API.Create for "bridge-816000" (driver="qemu2")
	I0805 04:46:31.461589   10517 client.go:168] LocalClient.Create starting
	I0805 04:46:31.461684   10517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:46:31.461718   10517 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:31.461726   10517 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:31.461778   10517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:46:31.461800   10517 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:31.461807   10517 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:31.462170   10517 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:46:31.611846   10517 main.go:141] libmachine: Creating SSH key...
	I0805 04:46:31.815693   10517 main.go:141] libmachine: Creating Disk image...
	I0805 04:46:31.815702   10517 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:46:31.815916   10517 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/disk.qcow2
	I0805 04:46:31.825308   10517 main.go:141] libmachine: STDOUT: 
	I0805 04:46:31.825327   10517 main.go:141] libmachine: STDERR: 
	I0805 04:46:31.825377   10517 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/disk.qcow2 +20000M
	I0805 04:46:31.833439   10517 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:46:31.833454   10517 main.go:141] libmachine: STDERR: 
	I0805 04:46:31.833468   10517 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/disk.qcow2
	I0805 04:46:31.833472   10517 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:46:31.833485   10517 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:46:31.833508   10517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:4a:75:b5:dd:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/disk.qcow2
	I0805 04:46:31.835161   10517 main.go:141] libmachine: STDOUT: 
	I0805 04:46:31.835176   10517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:46:31.835196   10517 client.go:171] duration metric: took 373.598167ms to LocalClient.Create
	I0805 04:46:33.837444   10517 start.go:128] duration metric: took 2.396467875s to createHost
	I0805 04:46:33.837660   10517 start.go:83] releasing machines lock for "bridge-816000", held for 2.396657417s
	W0805 04:46:33.837740   10517 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:33.843797   10517 out.go:177] * Deleting "bridge-816000" in qemu2 ...
	W0805 04:46:33.870269   10517 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:33.870305   10517 start.go:729] Will try again in 5 seconds ...
	I0805 04:46:38.872548   10517 start.go:360] acquireMachinesLock for bridge-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:46:38.873170   10517 start.go:364] duration metric: took 525.208µs to acquireMachinesLock for "bridge-816000"
	I0805 04:46:38.873327   10517 start.go:93] Provisioning new machine with config: &{Name:bridge-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:46:38.873605   10517 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:46:38.883027   10517 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:46:38.934340   10517 start.go:159] libmachine.API.Create for "bridge-816000" (driver="qemu2")
	I0805 04:46:38.934388   10517 client.go:168] LocalClient.Create starting
	I0805 04:46:38.934515   10517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:46:38.934589   10517 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:38.934608   10517 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:38.934680   10517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:46:38.934733   10517 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:38.934745   10517 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:38.935330   10517 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:46:39.094002   10517 main.go:141] libmachine: Creating SSH key...
	I0805 04:46:39.361585   10517 main.go:141] libmachine: Creating Disk image...
	I0805 04:46:39.361600   10517 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:46:39.361823   10517 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/disk.qcow2
	I0805 04:46:39.372307   10517 main.go:141] libmachine: STDOUT: 
	I0805 04:46:39.372331   10517 main.go:141] libmachine: STDERR: 
	I0805 04:46:39.372402   10517 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/disk.qcow2 +20000M
	I0805 04:46:39.381623   10517 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:46:39.381644   10517 main.go:141] libmachine: STDERR: 
	I0805 04:46:39.381661   10517 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/disk.qcow2
	I0805 04:46:39.381665   10517 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:46:39.381677   10517 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:46:39.381718   10517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:e7:c0:1b:8e:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/bridge-816000/disk.qcow2
	I0805 04:46:39.383784   10517 main.go:141] libmachine: STDOUT: 
	I0805 04:46:39.383800   10517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:46:39.383814   10517 client.go:171] duration metric: took 449.413791ms to LocalClient.Create
	I0805 04:46:41.385964   10517 start.go:128] duration metric: took 2.512256083s to createHost
	I0805 04:46:41.386071   10517 start.go:83] releasing machines lock for "bridge-816000", held for 2.512851583s
	W0805 04:46:41.386421   10517 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:41.398129   10517 out.go:177] 
	W0805 04:46:41.402230   10517 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:46:41.402258   10517 out.go:239] * 
	* 
	W0805 04:46:41.404897   10517 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:46:41.412134   10517 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.11s)
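
The bridge profile fails identically to enable-default-cni, which points at the shared socket_vmnet daemon rather than anything per-profile. A second hypothetical sketch (names and messages are ours, only Go's standard library is assumed) separates the two plausible states of the socket: the file missing entirely, versus a file that exists with no listener behind it. The logs in this report correspond to the second state.

package main

// Hypothetical triage helper, not part of minikube: distinguishes a missing
// socket file from one that exists but has no daemon accepting on it.
import (
	"errors"
	"fmt"
	"net"
	"os"
	"syscall"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the logs above

	if _, err := os.Stat(sock); errors.Is(err, os.ErrNotExist) {
		fmt.Println("socket file missing: socket_vmnet has likely never been started")
		os.Exit(1)
	}

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	switch {
	case err == nil:
		conn.Close()
		fmt.Println("socket_vmnet is healthy")
	case errors.Is(err, syscall.ECONNREFUSED):
		// The state these logs show: the socket file exists but the daemon
		// behind it has stopped or crashed.
		fmt.Println("connection refused: the socket_vmnet daemon needs a restart")
		os.Exit(1)
	default:
		fmt.Printf("unexpected error dialing %s: %v\n", sock, err)
		os.Exit(1)
	}
}

On macOS CI hosts socket_vmnet typically runs as a root launchd service, so restarting it is an agent-level fix outside what these tests control.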

TestNetworkPlugins/group/kubenet/Start (9.73s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.730365208s)

-- stdout --
	* [kubenet-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-816000" primary control-plane node in "kubenet-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:46:43.558746   10626 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:46:43.558869   10626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:46:43.558873   10626 out.go:304] Setting ErrFile to fd 2...
	I0805 04:46:43.558875   10626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:46:43.558994   10626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:46:43.560078   10626 out.go:298] Setting JSON to false
	I0805 04:46:43.576730   10626 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6373,"bootTime":1722852030,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:46:43.576796   10626 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:46:43.580765   10626 out.go:177] * [kubenet-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:46:43.587728   10626 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:46:43.587794   10626 notify.go:220] Checking for updates...
	I0805 04:46:43.594600   10626 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:46:43.597696   10626 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:46:43.600682   10626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:46:43.603677   10626 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:46:43.606683   10626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:46:43.609842   10626 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:46:43.609913   10626 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:46:43.609961   10626 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:46:43.613630   10626 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:46:43.619602   10626 start.go:297] selected driver: qemu2
	I0805 04:46:43.619607   10626 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:46:43.619612   10626 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:46:43.621828   10626 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:46:43.624684   10626 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:46:43.627703   10626 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:46:43.627732   10626 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0805 04:46:43.627763   10626 start.go:340] cluster config:
	{Name:kubenet-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:46:43.631241   10626 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:46:43.638645   10626 out.go:177] * Starting "kubenet-816000" primary control-plane node in "kubenet-816000" cluster
	I0805 04:46:43.642705   10626 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:46:43.642717   10626 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:46:43.642731   10626 cache.go:56] Caching tarball of preloaded images
	I0805 04:46:43.642774   10626 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:46:43.642779   10626 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:46:43.642827   10626 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/kubenet-816000/config.json ...
	I0805 04:46:43.642837   10626 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/kubenet-816000/config.json: {Name:mk29221318b407c073b8acf6985552bd59d769e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:46:43.643184   10626 start.go:360] acquireMachinesLock for kubenet-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:46:43.643218   10626 start.go:364] duration metric: took 27.333µs to acquireMachinesLock for "kubenet-816000"
	I0805 04:46:43.643228   10626 start.go:93] Provisioning new machine with config: &{Name:kubenet-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:46:43.643254   10626 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:46:43.651701   10626 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:46:43.666916   10626 start.go:159] libmachine.API.Create for "kubenet-816000" (driver="qemu2")
	I0805 04:46:43.666944   10626 client.go:168] LocalClient.Create starting
	I0805 04:46:43.667001   10626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:46:43.667031   10626 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:43.667040   10626 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:43.667082   10626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:46:43.667105   10626 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:43.667113   10626 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:43.667458   10626 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:46:43.813890   10626 main.go:141] libmachine: Creating SSH key...
	I0805 04:46:43.856928   10626 main.go:141] libmachine: Creating Disk image...
	I0805 04:46:43.856937   10626 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:46:43.857111   10626 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/disk.qcow2
	I0805 04:46:43.866206   10626 main.go:141] libmachine: STDOUT: 
	I0805 04:46:43.866224   10626 main.go:141] libmachine: STDERR: 
	I0805 04:46:43.866268   10626 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/disk.qcow2 +20000M
	I0805 04:46:43.874124   10626 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:46:43.874136   10626 main.go:141] libmachine: STDERR: 
	I0805 04:46:43.874148   10626 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/disk.qcow2
	I0805 04:46:43.874153   10626 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:46:43.874164   10626 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:46:43.874196   10626 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:e1:90:b5:e2:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/disk.qcow2
	I0805 04:46:43.875786   10626 main.go:141] libmachine: STDOUT: 
	I0805 04:46:43.875800   10626 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:46:43.875816   10626 client.go:171] duration metric: took 208.86675ms to LocalClient.Create
	I0805 04:46:45.878053   10626 start.go:128] duration metric: took 2.234746125s to createHost
	I0805 04:46:45.878118   10626 start.go:83] releasing machines lock for "kubenet-816000", held for 2.234869709s
	W0805 04:46:45.878222   10626 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:45.885522   10626 out.go:177] * Deleting "kubenet-816000" in qemu2 ...
	W0805 04:46:45.912568   10626 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:45.912602   10626 start.go:729] Will try again in 5 seconds ...
	I0805 04:46:50.914930   10626 start.go:360] acquireMachinesLock for kubenet-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:46:50.915519   10626 start.go:364] duration metric: took 431.041µs to acquireMachinesLock for "kubenet-816000"
	I0805 04:46:50.915585   10626 start.go:93] Provisioning new machine with config: &{Name:kubenet-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:46:50.915893   10626 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:46:50.925421   10626 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:46:50.975587   10626 start.go:159] libmachine.API.Create for "kubenet-816000" (driver="qemu2")
	I0805 04:46:50.975636   10626 client.go:168] LocalClient.Create starting
	I0805 04:46:50.975755   10626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:46:50.975826   10626 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:50.975845   10626 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:50.975908   10626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:46:50.975953   10626 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:50.975965   10626 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:50.976516   10626 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:46:51.133739   10626 main.go:141] libmachine: Creating SSH key...
	I0805 04:46:51.204996   10626 main.go:141] libmachine: Creating Disk image...
	I0805 04:46:51.205005   10626 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:46:51.205183   10626 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/disk.qcow2
	I0805 04:46:51.214663   10626 main.go:141] libmachine: STDOUT: 
	I0805 04:46:51.214679   10626 main.go:141] libmachine: STDERR: 
	I0805 04:46:51.214737   10626 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/disk.qcow2 +20000M
	I0805 04:46:51.222571   10626 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:46:51.222592   10626 main.go:141] libmachine: STDERR: 
	I0805 04:46:51.222607   10626 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/disk.qcow2
	I0805 04:46:51.222613   10626 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:46:51.222623   10626 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:46:51.222655   10626 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:03:6a:7a:75:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/kubenet-816000/disk.qcow2
	I0805 04:46:51.224364   10626 main.go:141] libmachine: STDOUT: 
	I0805 04:46:51.224378   10626 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:46:51.224394   10626 client.go:171] duration metric: took 248.749208ms to LocalClient.Create
	I0805 04:46:53.226629   10626 start.go:128] duration metric: took 2.310676167s to createHost
	I0805 04:46:53.226713   10626 start.go:83] releasing machines lock for "kubenet-816000", held for 2.311147s
	W0805 04:46:53.227122   10626 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:53.236763   10626 out.go:177] 
	W0805 04:46:53.242787   10626 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:46:53.242822   10626 out.go:239] * 
	* 
	W0805 04:46:53.244614   10626 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:46:53.249728   10626 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.73s)
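Every failure in this group has the same root cause, visible in the stderr capture above: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and host creation is aborted. On a unix socket, "Connection refused" typically means the socket file exists but nothing is listening on it, i.e. the socket_vmnet daemon is not running on the CI host. The standalone Go sketch below is a hypothetical diagnostic (not part of minikube or this test suite) that probes the socket the same way a client would:

	// socketprobe.go: hypothetical diagnostic, not part of the test suite.
	// Dials the unix socket used by socket_vmnet_client and reports whether
	// a socket_vmnet daemon is accepting connections.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// On this agent the error should match the logs above:
			// connect: connection refused
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On a healthy host this prints the success line; here it would fail exactly like every "Creating qemu2 VM" attempt in this report.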

TestNetworkPlugins/group/custom-flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.8921405s)

-- stdout --
	* [custom-flannel-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-816000" primary control-plane node in "custom-flannel-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:46:55.398567   10735 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:46:55.398698   10735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:46:55.398702   10735 out.go:304] Setting ErrFile to fd 2...
	I0805 04:46:55.398705   10735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:46:55.398845   10735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:46:55.399950   10735 out.go:298] Setting JSON to false
	I0805 04:46:55.416418   10735 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6385,"bootTime":1722852030,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:46:55.416496   10735 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:46:55.420660   10735 out.go:177] * [custom-flannel-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:46:55.426551   10735 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:46:55.426596   10735 notify.go:220] Checking for updates...
	I0805 04:46:55.433525   10735 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:46:55.436484   10735 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:46:55.439481   10735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:46:55.440992   10735 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:46:55.444483   10735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:46:55.447826   10735 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:46:55.447888   10735 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:46:55.447934   10735 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:46:55.451333   10735 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:46:55.458487   10735 start.go:297] selected driver: qemu2
	I0805 04:46:55.458493   10735 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:46:55.458498   10735 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:46:55.460749   10735 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:46:55.463557   10735 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:46:55.466519   10735 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:46:55.466546   10735 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0805 04:46:55.466555   10735 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0805 04:46:55.466585   10735 start.go:340] cluster config:
	{Name:custom-flannel-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:46:55.470147   10735 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:46:55.476474   10735 out.go:177] * Starting "custom-flannel-816000" primary control-plane node in "custom-flannel-816000" cluster
	I0805 04:46:55.480466   10735 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:46:55.480480   10735 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:46:55.480489   10735 cache.go:56] Caching tarball of preloaded images
	I0805 04:46:55.480546   10735 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:46:55.480552   10735 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:46:55.480611   10735 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/custom-flannel-816000/config.json ...
	I0805 04:46:55.480622   10735 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/custom-flannel-816000/config.json: {Name:mkf5555b0d89342b06a8825e3187fa839a298a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:46:55.481019   10735 start.go:360] acquireMachinesLock for custom-flannel-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:46:55.481054   10735 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "custom-flannel-816000"
	I0805 04:46:55.481064   10735 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:46:55.481094   10735 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:46:55.484467   10735 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:46:55.499880   10735 start.go:159] libmachine.API.Create for "custom-flannel-816000" (driver="qemu2")
	I0805 04:46:55.499906   10735 client.go:168] LocalClient.Create starting
	I0805 04:46:55.499965   10735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:46:55.499994   10735 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:55.500003   10735 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:55.500039   10735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:46:55.500061   10735 main.go:141] libmachine: Decoding PEM data...
	I0805 04:46:55.500068   10735 main.go:141] libmachine: Parsing certificate...
	I0805 04:46:55.500392   10735 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:46:55.647982   10735 main.go:141] libmachine: Creating SSH key...
	I0805 04:46:55.738649   10735 main.go:141] libmachine: Creating Disk image...
	I0805 04:46:55.738657   10735 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:46:55.738855   10735 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/disk.qcow2
	I0805 04:46:55.748241   10735 main.go:141] libmachine: STDOUT: 
	I0805 04:46:55.748261   10735 main.go:141] libmachine: STDERR: 
	I0805 04:46:55.748313   10735 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/disk.qcow2 +20000M
	I0805 04:46:55.756464   10735 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:46:55.756478   10735 main.go:141] libmachine: STDERR: 
	I0805 04:46:55.756494   10735 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/disk.qcow2
	I0805 04:46:55.756497   10735 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:46:55.756510   10735 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:46:55.756535   10735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:19:2d:fc:0c:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/disk.qcow2
	I0805 04:46:55.758321   10735 main.go:141] libmachine: STDOUT: 
	I0805 04:46:55.758339   10735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:46:55.758355   10735 client.go:171] duration metric: took 258.443042ms to LocalClient.Create
	I0805 04:46:57.760550   10735 start.go:128] duration metric: took 2.279408916s to createHost
	I0805 04:46:57.760624   10735 start.go:83] releasing machines lock for "custom-flannel-816000", held for 2.279540416s
	W0805 04:46:57.760681   10735 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:57.765143   10735 out.go:177] * Deleting "custom-flannel-816000" in qemu2 ...
	W0805 04:46:57.788265   10735 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:46:57.788290   10735 start.go:729] Will try again in 5 seconds ...
	I0805 04:47:02.790546   10735 start.go:360] acquireMachinesLock for custom-flannel-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:47:02.790862   10735 start.go:364] duration metric: took 232.292µs to acquireMachinesLock for "custom-flannel-816000"
	I0805 04:47:02.790940   10735 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:47:02.791089   10735 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:47:02.798408   10735 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:47:02.827997   10735 start.go:159] libmachine.API.Create for "custom-flannel-816000" (driver="qemu2")
	I0805 04:47:02.828052   10735 client.go:168] LocalClient.Create starting
	I0805 04:47:02.828137   10735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:47:02.828190   10735 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:02.828201   10735 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:02.828250   10735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:47:02.828291   10735 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:02.828300   10735 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:02.828937   10735 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:47:02.981668   10735 main.go:141] libmachine: Creating SSH key...
	I0805 04:47:03.198797   10735 main.go:141] libmachine: Creating Disk image...
	I0805 04:47:03.198809   10735 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:47:03.199052   10735 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/disk.qcow2
	I0805 04:47:03.209702   10735 main.go:141] libmachine: STDOUT: 
	I0805 04:47:03.209730   10735 main.go:141] libmachine: STDERR: 
	I0805 04:47:03.209792   10735 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/disk.qcow2 +20000M
	I0805 04:47:03.219075   10735 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:47:03.219093   10735 main.go:141] libmachine: STDERR: 
	I0805 04:47:03.219116   10735 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/disk.qcow2
	I0805 04:47:03.219122   10735 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:47:03.219135   10735 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:47:03.219162   10735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:67:e3:c7:52:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/custom-flannel-816000/disk.qcow2
	I0805 04:47:03.221284   10735 main.go:141] libmachine: STDOUT: 
	I0805 04:47:03.221301   10735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:47:03.221326   10735 client.go:171] duration metric: took 393.26625ms to LocalClient.Create
	I0805 04:47:05.223456   10735 start.go:128] duration metric: took 2.432324916s to createHost
	I0805 04:47:05.223527   10735 start.go:83] releasing machines lock for "custom-flannel-816000", held for 2.432625709s
	W0805 04:47:05.223663   10735 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:05.236037   10735 out.go:177] 
	W0805 04:47:05.240066   10735 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:47:05.240085   10735 out.go:239] * 
	* 
	W0805 04:47:05.241019   10735 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:47:05.253000   10735 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.89s)
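Note that the disk preparation in each attempt succeeds; only the networked QEMU launch fails. The two qemu-img steps logged above (a raw-to-qcow2 convert followed by a +20000M resize) can be reproduced in isolation. The sketch below mirrors those exact commands via os/exec; the file paths are placeholders, and this is illustrative only, not the qemu2 driver's actual code:

	// diskimage.go: illustrative sketch of the two qemu-img invocations in
	// the logs above. Assumes qemu-img is on PATH and disk.qcow2.raw exists.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // placeholder paths
		// Convert the seeded raw image to qcow2 format...
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
		// ...then grow the virtual disk by 20000 MB, as the driver logs.
		run("qemu-img", "resize", qcow2, "+20000M")
	}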

TestNetworkPlugins/group/calico/Start (9.85s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.852443375s)

-- stdout --
	* [calico-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-816000" primary control-plane node in "calico-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:47:07.558230   10854 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:47:07.561583   10854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:47:07.561587   10854 out.go:304] Setting ErrFile to fd 2...
	I0805 04:47:07.561591   10854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:47:07.561732   10854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:47:07.563012   10854 out.go:298] Setting JSON to false
	I0805 04:47:07.579842   10854 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6397,"bootTime":1722852030,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:47:07.579915   10854 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:47:07.584441   10854 out.go:177] * [calico-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:47:07.591569   10854 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:47:07.591693   10854 notify.go:220] Checking for updates...
	I0805 04:47:07.597499   10854 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:47:07.600536   10854 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:47:07.601950   10854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:47:07.605475   10854 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:47:07.608533   10854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:47:07.611946   10854 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:47:07.612009   10854 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:47:07.612063   10854 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:47:07.615502   10854 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:47:07.622406   10854 start.go:297] selected driver: qemu2
	I0805 04:47:07.622412   10854 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:47:07.622417   10854 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:47:07.624813   10854 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:47:07.627545   10854 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:47:07.630599   10854 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:47:07.630631   10854 cni.go:84] Creating CNI manager for "calico"
	I0805 04:47:07.630635   10854 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0805 04:47:07.630658   10854 start.go:340] cluster config:
	{Name:calico-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:47:07.634220   10854 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:07.641486   10854 out.go:177] * Starting "calico-816000" primary control-plane node in "calico-816000" cluster
	I0805 04:47:07.645548   10854 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:47:07.645559   10854 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:47:07.645568   10854 cache.go:56] Caching tarball of preloaded images
	I0805 04:47:07.645614   10854 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:47:07.645618   10854 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:47:07.645664   10854 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/calico-816000/config.json ...
	I0805 04:47:07.645674   10854 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/calico-816000/config.json: {Name:mk48c3d2221d6553d7f84dfe0f21c08ae9749b9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:47:07.645877   10854 start.go:360] acquireMachinesLock for calico-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:47:07.645908   10854 start.go:364] duration metric: took 26.125µs to acquireMachinesLock for "calico-816000"
	I0805 04:47:07.645918   10854 start.go:93] Provisioning new machine with config: &{Name:calico-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:47:07.645950   10854 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:47:07.653515   10854 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:47:07.669178   10854 start.go:159] libmachine.API.Create for "calico-816000" (driver="qemu2")
	I0805 04:47:07.669210   10854 client.go:168] LocalClient.Create starting
	I0805 04:47:07.669275   10854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:47:07.669309   10854 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:07.669319   10854 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:07.669355   10854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:47:07.669378   10854 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:07.669386   10854 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:07.669742   10854 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:47:07.816501   10854 main.go:141] libmachine: Creating SSH key...
	I0805 04:47:07.947612   10854 main.go:141] libmachine: Creating Disk image...
	I0805 04:47:07.947618   10854 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:47:07.947815   10854 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/disk.qcow2
	I0805 04:47:07.957254   10854 main.go:141] libmachine: STDOUT: 
	I0805 04:47:07.957270   10854 main.go:141] libmachine: STDERR: 
	I0805 04:47:07.957329   10854 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/disk.qcow2 +20000M
	I0805 04:47:07.965340   10854 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:47:07.965355   10854 main.go:141] libmachine: STDERR: 
	I0805 04:47:07.965369   10854 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/disk.qcow2
	I0805 04:47:07.965372   10854 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:47:07.965386   10854 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:47:07.965422   10854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:29:04:84:66:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/disk.qcow2
	I0805 04:47:07.967127   10854 main.go:141] libmachine: STDOUT: 
	I0805 04:47:07.967145   10854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:47:07.967175   10854 client.go:171] duration metric: took 297.943875ms to LocalClient.Create
	I0805 04:47:09.969279   10854 start.go:128] duration metric: took 2.323294833s to createHost
	I0805 04:47:09.969318   10854 start.go:83] releasing machines lock for "calico-816000", held for 2.323382584s
	W0805 04:47:09.969344   10854 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:09.976910   10854 out.go:177] * Deleting "calico-816000" in qemu2 ...
	W0805 04:47:09.995353   10854 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:09.995362   10854 start.go:729] Will try again in 5 seconds ...
	I0805 04:47:14.997718   10854 start.go:360] acquireMachinesLock for calico-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:47:14.998369   10854 start.go:364] duration metric: took 491.459µs to acquireMachinesLock for "calico-816000"
	I0805 04:47:14.998450   10854 start.go:93] Provisioning new machine with config: &{Name:calico-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:47:14.998744   10854 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:47:15.007442   10854 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:47:15.059351   10854 start.go:159] libmachine.API.Create for "calico-816000" (driver="qemu2")
	I0805 04:47:15.059395   10854 client.go:168] LocalClient.Create starting
	I0805 04:47:15.059515   10854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:47:15.059587   10854 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:15.059603   10854 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:15.059682   10854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:47:15.059727   10854 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:15.059747   10854 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:15.060329   10854 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:47:15.217687   10854 main.go:141] libmachine: Creating SSH key...
	I0805 04:47:15.318620   10854 main.go:141] libmachine: Creating Disk image...
	I0805 04:47:15.318626   10854 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:47:15.318820   10854 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/disk.qcow2
	I0805 04:47:15.328160   10854 main.go:141] libmachine: STDOUT: 
	I0805 04:47:15.328176   10854 main.go:141] libmachine: STDERR: 
	I0805 04:47:15.328242   10854 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/disk.qcow2 +20000M
	I0805 04:47:15.336242   10854 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:47:15.336255   10854 main.go:141] libmachine: STDERR: 
	I0805 04:47:15.336271   10854 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/disk.qcow2
	I0805 04:47:15.336277   10854 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:47:15.336291   10854 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:47:15.336319   10854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:c1:8d:75:66:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/calico-816000/disk.qcow2
	I0805 04:47:15.337991   10854 main.go:141] libmachine: STDOUT: 
	I0805 04:47:15.338004   10854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:47:15.338017   10854 client.go:171] duration metric: took 278.6145ms to LocalClient.Create
	I0805 04:47:17.340349   10854 start.go:128] duration metric: took 2.341417s to createHost
	I0805 04:47:17.340427   10854 start.go:83] releasing machines lock for "calico-816000", held for 2.342011083s
	W0805 04:47:17.340774   10854 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:17.354410   10854 out.go:177] 
	W0805 04:47:17.358612   10854 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:47:17.358636   10854 out.go:239] * 
	* 
	W0805 04:47:17.360983   10854 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:47:17.369485   10854 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.85s)
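
Note on the failure mode: every qemu2 start in this run dies at the same point, before QEMU is even launched, because nothing on the host is accepting connections at /var/run/socket_vmnet. The following minimal Go sketch reproduces the probe that fails; it assumes only that socket_vmnet serves a plain unix-domain socket at that path (the file name "socketprobe.go" and the timeout are illustrative, not taken from the suite).

	// socketprobe.go: dial the socket_vmnet socket the same way a client
	// connect would. A dead or missing daemon yields the exact
	// "Connection refused" error reported throughout this run.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const path = "/var/run/socket_vmnet" // path used by the failing commands above
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			// "connect: connection refused" if the daemon is down,
			// "no such file or directory" if the socket was never created.
			fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the build agent, restarting the socket_vmnet daemon (it normally runs as root, e.g. as a launchd service) should clear this whole class of failures; the per-test logs that follow add no new information.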

TestNetworkPlugins/group/false/Start (9.85s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-816000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.85150825s)

-- stdout --
	* [false-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-816000" primary control-plane node in "false-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:47:19.828028   10972 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:47:19.828179   10972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:47:19.828182   10972 out.go:304] Setting ErrFile to fd 2...
	I0805 04:47:19.828188   10972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:47:19.828328   10972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:47:19.829355   10972 out.go:298] Setting JSON to false
	I0805 04:47:19.846385   10972 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6409,"bootTime":1722852030,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:47:19.846480   10972 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:47:19.851105   10972 out.go:177] * [false-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:47:19.856893   10972 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:47:19.856957   10972 notify.go:220] Checking for updates...
	I0805 04:47:19.863921   10972 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:47:19.866901   10972 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:47:19.869959   10972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:47:19.877902   10972 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:47:19.881908   10972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:47:19.886245   10972 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:47:19.886313   10972 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:47:19.886354   10972 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:47:19.889934   10972 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:47:19.897770   10972 start.go:297] selected driver: qemu2
	I0805 04:47:19.897776   10972 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:47:19.897781   10972 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:47:19.900099   10972 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:47:19.903943   10972 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:47:19.907929   10972 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:47:19.907940   10972 cni.go:84] Creating CNI manager for "false"
	I0805 04:47:19.907962   10972 start.go:340] cluster config:
	{Name:false-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:47:19.911369   10972 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:19.915904   10972 out.go:177] * Starting "false-816000" primary control-plane node in "false-816000" cluster
	I0805 04:47:19.923750   10972 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:47:19.923768   10972 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:47:19.923779   10972 cache.go:56] Caching tarball of preloaded images
	I0805 04:47:19.923836   10972 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:47:19.923841   10972 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:47:19.923897   10972 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/false-816000/config.json ...
	I0805 04:47:19.923906   10972 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/false-816000/config.json: {Name:mkc7e724da3ab0ec8925cd85580c961229a0c5ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:47:19.924296   10972 start.go:360] acquireMachinesLock for false-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:47:19.924326   10972 start.go:364] duration metric: took 25µs to acquireMachinesLock for "false-816000"
	I0805 04:47:19.924335   10972 start.go:93] Provisioning new machine with config: &{Name:false-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:47:19.924373   10972 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:47:19.932761   10972 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:47:19.948182   10972 start.go:159] libmachine.API.Create for "false-816000" (driver="qemu2")
	I0805 04:47:19.948203   10972 client.go:168] LocalClient.Create starting
	I0805 04:47:19.948281   10972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:47:19.948314   10972 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:19.948323   10972 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:19.948360   10972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:47:19.948383   10972 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:19.948390   10972 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:19.948915   10972 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:47:20.096895   10972 main.go:141] libmachine: Creating SSH key...
	I0805 04:47:20.242762   10972 main.go:141] libmachine: Creating Disk image...
	I0805 04:47:20.242769   10972 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:47:20.242976   10972 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/disk.qcow2
	I0805 04:47:20.252556   10972 main.go:141] libmachine: STDOUT: 
	I0805 04:47:20.252576   10972 main.go:141] libmachine: STDERR: 
	I0805 04:47:20.252620   10972 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/disk.qcow2 +20000M
	I0805 04:47:20.260684   10972 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:47:20.260699   10972 main.go:141] libmachine: STDERR: 
	I0805 04:47:20.260714   10972 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/disk.qcow2
	I0805 04:47:20.260726   10972 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:47:20.260737   10972 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:47:20.260766   10972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b8:91:88:ee:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/disk.qcow2
	I0805 04:47:20.262471   10972 main.go:141] libmachine: STDOUT: 
	I0805 04:47:20.262486   10972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:47:20.262505   10972 client.go:171] duration metric: took 314.294584ms to LocalClient.Create
	I0805 04:47:22.264735   10972 start.go:128] duration metric: took 2.340312625s to createHost
	I0805 04:47:22.264847   10972 start.go:83] releasing machines lock for "false-816000", held for 2.340488791s
	W0805 04:47:22.264920   10972 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:22.279408   10972 out.go:177] * Deleting "false-816000" in qemu2 ...
	W0805 04:47:22.304803   10972 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:22.304834   10972 start.go:729] Will try again in 5 seconds ...
	I0805 04:47:27.306455   10972 start.go:360] acquireMachinesLock for false-816000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:47:27.307007   10972 start.go:364] duration metric: took 417.834µs to acquireMachinesLock for "false-816000"
	I0805 04:47:27.307084   10972 start.go:93] Provisioning new machine with config: &{Name:false-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:47:27.307369   10972 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:47:27.316835   10972 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 04:47:27.364676   10972 start.go:159] libmachine.API.Create for "false-816000" (driver="qemu2")
	I0805 04:47:27.364726   10972 client.go:168] LocalClient.Create starting
	I0805 04:47:27.364854   10972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:47:27.364924   10972 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:27.364954   10972 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:27.365029   10972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:47:27.365075   10972 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:27.365089   10972 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:27.365697   10972 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:47:27.519984   10972 main.go:141] libmachine: Creating SSH key...
	I0805 04:47:27.598774   10972 main.go:141] libmachine: Creating Disk image...
	I0805 04:47:27.598783   10972 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:47:27.599366   10972 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/disk.qcow2
	I0805 04:47:27.608638   10972 main.go:141] libmachine: STDOUT: 
	I0805 04:47:27.608658   10972 main.go:141] libmachine: STDERR: 
	I0805 04:47:27.608712   10972 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/disk.qcow2 +20000M
	I0805 04:47:27.616849   10972 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:47:27.616867   10972 main.go:141] libmachine: STDERR: 
	I0805 04:47:27.616877   10972 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/disk.qcow2
	I0805 04:47:27.616881   10972 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:47:27.616898   10972 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:47:27.616931   10972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:62:c7:5e:2d:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/false-816000/disk.qcow2
	I0805 04:47:27.618668   10972 main.go:141] libmachine: STDOUT: 
	I0805 04:47:27.618684   10972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:47:27.618696   10972 client.go:171] duration metric: took 253.9605ms to LocalClient.Create
	I0805 04:47:29.619659   10972 start.go:128] duration metric: took 2.312239291s to createHost
	I0805 04:47:29.619699   10972 start.go:83] releasing machines lock for "false-816000", held for 2.312648584s
	W0805 04:47:29.619917   10972 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:29.628135   10972 out.go:177] 
	W0805 04:47:29.632261   10972 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:47:29.632272   10972 out.go:239] * 
	* 
	W0805 04:47:29.633171   10972 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:47:29.641165   10972 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.85s)
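
The "-netdev socket,id=net0,fd=3" flag in the failing QEMU command lines explains the coupling: socket_vmnet_client is expected to connect to /var/run/socket_vmnet first and then exec qemu-system-aarch64 with that connection already open as file descriptor 3. A conceptual Go sketch of that hand-off follows; it assumes a plain unix stream socket with no extra handshake, so treat it as an illustration of the mechanism rather than the real client.

	// vmnetexec.go: connect to the vmnet socket, then run the given command
	// with the connection passed down as fd 3 (ExtraFiles start at fd 3).
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		if len(os.Args) < 2 {
			log.Fatal("usage: vmnetexec <command> [args...]")
		}
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// This is the step that fails in every log above.
			log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command(os.Args[1], os.Args[2:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		cmd.ExtraFiles = []*os.File{f} // the child sees this as fd 3
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

Because the connect happens before the exec, a refused connection means QEMU never starts at all, which is why STDOUT is empty and only the client's STDERR line appears in the captured output.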

TestStartStop/group/old-k8s-version/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-207000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-207000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.78622725s)

-- stdout --
	* [old-k8s-version-207000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-207000" primary control-plane node in "old-k8s-version-207000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-207000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:47:31.813649   11083 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:47:31.813778   11083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:47:31.813781   11083 out.go:304] Setting ErrFile to fd 2...
	I0805 04:47:31.813784   11083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:47:31.813929   11083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:47:31.815143   11083 out.go:298] Setting JSON to false
	I0805 04:47:31.832150   11083 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6421,"bootTime":1722852030,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:47:31.832224   11083 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:47:31.837267   11083 out.go:177] * [old-k8s-version-207000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:47:31.844306   11083 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:47:31.844382   11083 notify.go:220] Checking for updates...
	I0805 04:47:31.852299   11083 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:47:31.855418   11083 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:47:31.858249   11083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:47:31.861284   11083 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:47:31.864368   11083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:47:31.867544   11083 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:47:31.867617   11083 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:47:31.867662   11083 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:47:31.872297   11083 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:47:31.879246   11083 start.go:297] selected driver: qemu2
	I0805 04:47:31.879252   11083 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:47:31.879258   11083 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:47:31.881521   11083 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:47:31.884324   11083 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:47:31.887357   11083 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:47:31.887392   11083 cni.go:84] Creating CNI manager for ""
	I0805 04:47:31.887398   11083 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 04:47:31.887425   11083 start.go:340] cluster config:
	{Name:old-k8s-version-207000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-207000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:47:31.890904   11083 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:31.898259   11083 out.go:177] * Starting "old-k8s-version-207000" primary control-plane node in "old-k8s-version-207000" cluster
	I0805 04:47:31.902282   11083 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 04:47:31.902297   11083 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 04:47:31.902313   11083 cache.go:56] Caching tarball of preloaded images
	I0805 04:47:31.902372   11083 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:47:31.902379   11083 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 04:47:31.902448   11083 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/old-k8s-version-207000/config.json ...
	I0805 04:47:31.902458   11083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/old-k8s-version-207000/config.json: {Name:mk379c0a1c811fb4b7ca60182c597b22d56fc412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:47:31.902905   11083 start.go:360] acquireMachinesLock for old-k8s-version-207000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:47:31.902937   11083 start.go:364] duration metric: took 26.5µs to acquireMachinesLock for "old-k8s-version-207000"
	I0805 04:47:31.902948   11083 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-207000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-207000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:47:31.902980   11083 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:47:31.911315   11083 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:47:31.926853   11083 start.go:159] libmachine.API.Create for "old-k8s-version-207000" (driver="qemu2")
	I0805 04:47:31.926880   11083 client.go:168] LocalClient.Create starting
	I0805 04:47:31.926944   11083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:47:31.926977   11083 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:31.926985   11083 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:31.927058   11083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:47:31.927091   11083 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:31.927102   11083 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:31.927454   11083 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:47:32.073955   11083 main.go:141] libmachine: Creating SSH key...
	I0805 04:47:32.163807   11083 main.go:141] libmachine: Creating Disk image...
	I0805 04:47:32.163812   11083 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:47:32.164001   11083 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2
	I0805 04:47:32.173605   11083 main.go:141] libmachine: STDOUT: 
	I0805 04:47:32.173631   11083 main.go:141] libmachine: STDERR: 
	I0805 04:47:32.173702   11083 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2 +20000M
	I0805 04:47:32.181875   11083 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:47:32.181889   11083 main.go:141] libmachine: STDERR: 
	I0805 04:47:32.181904   11083 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2
	I0805 04:47:32.181908   11083 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:47:32.181924   11083 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:47:32.181947   11083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:70:54:06:81:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2
	I0805 04:47:32.183630   11083 main.go:141] libmachine: STDOUT: 
	I0805 04:47:32.183644   11083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:47:32.183663   11083 client.go:171] duration metric: took 256.777167ms to LocalClient.Create
	I0805 04:47:34.185907   11083 start.go:128] duration metric: took 2.282874666s to createHost
	I0805 04:47:34.186053   11083 start.go:83] releasing machines lock for "old-k8s-version-207000", held for 2.283062875s
	W0805 04:47:34.186138   11083 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:34.199185   11083 out.go:177] * Deleting "old-k8s-version-207000" in qemu2 ...
	W0805 04:47:34.226878   11083 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:34.226913   11083 start.go:729] Will try again in 5 seconds ...
	I0805 04:47:39.229218   11083 start.go:360] acquireMachinesLock for old-k8s-version-207000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:47:39.229827   11083 start.go:364] duration metric: took 484.625µs to acquireMachinesLock for "old-k8s-version-207000"
	I0805 04:47:39.229900   11083 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-207000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-207000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:47:39.230215   11083 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:47:39.235116   11083 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:47:39.282550   11083 start.go:159] libmachine.API.Create for "old-k8s-version-207000" (driver="qemu2")
	I0805 04:47:39.282597   11083 client.go:168] LocalClient.Create starting
	I0805 04:47:39.282708   11083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:47:39.282776   11083 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:39.282792   11083 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:39.282857   11083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:47:39.282901   11083 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:39.282915   11083 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:39.283467   11083 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:47:39.443048   11083 main.go:141] libmachine: Creating SSH key...
	I0805 04:47:39.507261   11083 main.go:141] libmachine: Creating Disk image...
	I0805 04:47:39.507267   11083 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:47:39.507461   11083 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2
	I0805 04:47:39.517539   11083 main.go:141] libmachine: STDOUT: 
	I0805 04:47:39.517563   11083 main.go:141] libmachine: STDERR: 
	I0805 04:47:39.517643   11083 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2 +20000M
	I0805 04:47:39.526131   11083 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:47:39.526146   11083 main.go:141] libmachine: STDERR: 
	I0805 04:47:39.526159   11083 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2
	I0805 04:47:39.526163   11083 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:47:39.526172   11083 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:47:39.526195   11083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:42:c1:68:c0:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2
	I0805 04:47:39.527902   11083 main.go:141] libmachine: STDOUT: 
	I0805 04:47:39.527926   11083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:47:39.527940   11083 client.go:171] duration metric: took 245.334708ms to LocalClient.Create
	I0805 04:47:41.530124   11083 start.go:128] duration metric: took 2.299857833s to createHost
	I0805 04:47:41.530220   11083 start.go:83] releasing machines lock for "old-k8s-version-207000", held for 2.300348333s
	W0805 04:47:41.530493   11083 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-207000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-207000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:41.539929   11083 out.go:177] 
	W0805 04:47:41.545144   11083 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:47:41.545177   11083 out.go:239] * 
	* 
	W0805 04:47:41.546796   11083 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:47:41.557080   11083 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-207000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000: exit status 7 (63.325833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-207000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.85s)
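
The remaining old-k8s-version subtest failures below are a cascade from this FirstStart failure: the VM was never provisioned, so no "old-k8s-version-207000" context was written to the kubeconfig, and every later kubectl call fails with context "old-k8s-version-207000" does not exist rather than pointing at an independent bug. A short sketch of a guard that makes the dependency explicit, assuming k8s.io/client-go is available (the suite itself may gate on this differently):

	// contextcheck.go: verify the profile's kubeconfig context exists
	// before running subtests that shell out to kubectl --context.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		const name = "old-k8s-version-207000" // profile under test
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Fprintf(os.Stderr, "context %q does not exist; dependent subtests cannot pass\n", name)
			os.Exit(1)
		}
		fmt.Println("context present:", name)
	}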

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-207000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-207000 create -f testdata/busybox.yaml: exit status 1 (30.265583ms)

** stderr ** 
	error: context "old-k8s-version-207000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-207000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000: exit status 7 (30.6315ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-207000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000: exit status 7 (29.014416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-207000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
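
Note: every kubectl step in this group fails with the same root cause: FirstStart exited before a kubeconfig entry was written, so the "old-k8s-version-207000" context never existed. A minimal sketch, assuming k8s.io/client-go is available, that checks for the context the way kubectl resolves it (honoring $KUBECONFIG):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load() merges $KUBECONFIG / ~/.kube/config, as kubectl does.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["old-k8s-version-207000"]; !ok {
			// Matches the kubectl error seen in each subtest of this group.
			fmt.Println(`context "old-k8s-version-207000" does not exist`)
		}
	}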

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-207000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-207000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-207000 describe deploy/metrics-server -n kube-system: exit status 1 (27.284292ms)

** stderr ** 
	error: context "old-k8s-version-207000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-207000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000: exit status 7 (29.571208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-207000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-207000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-207000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.179878s)

-- stdout --
	* [old-k8s-version-207000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-207000" primary control-plane node in "old-k8s-version-207000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-207000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-207000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:47:45.225158   11141 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:47:45.225281   11141 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:47:45.225284   11141 out.go:304] Setting ErrFile to fd 2...
	I0805 04:47:45.225286   11141 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:47:45.225419   11141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:47:45.226486   11141 out.go:298] Setting JSON to false
	I0805 04:47:45.242901   11141 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6435,"bootTime":1722852030,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:47:45.242969   11141 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:47:45.247944   11141 out.go:177] * [old-k8s-version-207000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:47:45.255889   11141 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:47:45.255940   11141 notify.go:220] Checking for updates...
	I0805 04:47:45.265125   11141 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:47:45.267831   11141 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:47:45.270907   11141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:47:45.273831   11141 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:47:45.276855   11141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:47:45.280147   11141 config.go:182] Loaded profile config "old-k8s-version-207000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0805 04:47:45.283858   11141 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 04:47:45.286887   11141 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:47:45.290875   11141 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:47:45.297880   11141 start.go:297] selected driver: qemu2
	I0805 04:47:45.297888   11141 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-207000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-207000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:47:45.297937   11141 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:47:45.300409   11141 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:47:45.300433   11141 cni.go:84] Creating CNI manager for ""
	I0805 04:47:45.300440   11141 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 04:47:45.300468   11141 start.go:340] cluster config:
	{Name:old-k8s-version-207000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-207000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:47:45.304038   11141 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:45.311821   11141 out.go:177] * Starting "old-k8s-version-207000" primary control-plane node in "old-k8s-version-207000" cluster
	I0805 04:47:45.315875   11141 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 04:47:45.315888   11141 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 04:47:45.315897   11141 cache.go:56] Caching tarball of preloaded images
	I0805 04:47:45.315965   11141 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:47:45.315975   11141 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 04:47:45.316022   11141 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/old-k8s-version-207000/config.json ...
	I0805 04:47:45.316527   11141 start.go:360] acquireMachinesLock for old-k8s-version-207000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:47:45.316563   11141 start.go:364] duration metric: took 29.416µs to acquireMachinesLock for "old-k8s-version-207000"
	I0805 04:47:45.316572   11141 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:47:45.316576   11141 fix.go:54] fixHost starting: 
	I0805 04:47:45.316687   11141 fix.go:112] recreateIfNeeded on old-k8s-version-207000: state=Stopped err=<nil>
	W0805 04:47:45.316696   11141 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:47:45.319821   11141 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-207000" ...
	I0805 04:47:45.327908   11141 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:47:45.327953   11141 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:42:c1:68:c0:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2
	I0805 04:47:45.329805   11141 main.go:141] libmachine: STDOUT: 
	I0805 04:47:45.329828   11141 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:47:45.329855   11141 fix.go:56] duration metric: took 13.279625ms for fixHost
	I0805 04:47:45.329858   11141 start.go:83] releasing machines lock for "old-k8s-version-207000", held for 13.290875ms
	W0805 04:47:45.329865   11141 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:47:45.329900   11141 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:45.329904   11141 start.go:729] Will try again in 5 seconds ...
	I0805 04:47:50.332035   11141 start.go:360] acquireMachinesLock for old-k8s-version-207000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:47:50.332223   11141 start.go:364] duration metric: took 135.792µs to acquireMachinesLock for "old-k8s-version-207000"
	I0805 04:47:50.332251   11141 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:47:50.332257   11141 fix.go:54] fixHost starting: 
	I0805 04:47:50.332519   11141 fix.go:112] recreateIfNeeded on old-k8s-version-207000: state=Stopped err=<nil>
	W0805 04:47:50.332528   11141 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:47:50.340742   11141 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-207000" ...
	I0805 04:47:50.344774   11141 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:47:50.344919   11141 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:42:c1:68:c0:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/old-k8s-version-207000/disk.qcow2
	I0805 04:47:50.349123   11141 main.go:141] libmachine: STDOUT: 
	I0805 04:47:50.349157   11141 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:47:50.349191   11141 fix.go:56] duration metric: took 16.932542ms for fixHost
	I0805 04:47:50.349199   11141 start.go:83] releasing machines lock for "old-k8s-version-207000", held for 16.965792ms
	W0805 04:47:50.349281   11141 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-207000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-207000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:50.355749   11141 out.go:177] 
	W0805 04:47:50.359822   11141 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:47:50.359832   11141 out.go:239] * 
	* 
	W0805 04:47:50.360864   11141 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:47:50.368788   11141 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-207000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000: exit status 7 (50.058417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-207000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)
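
Note: both restart attempts above die at the same step: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and its dial to /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon is listening on this agent. A standard-library sketch that probes the same unix socket the driver command references:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config logged above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On a run like this one: "connect: connection refused".
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}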

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-207000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000: exit status 7 (30.885375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-207000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-207000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-207000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-207000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.676625ms)

** stderr ** 
	error: context "old-k8s-version-207000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-207000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000: exit status 7 (29.134333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-207000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-207000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000: exit status 7 (28.496333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-207000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
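
Note: the block above is a -want +got diff: every expected v1.20.0 image sits on the want side and the got side is empty, because "image list" has nothing to report from a host that never started. A minimal sketch, assuming the github.com/google/go-cmp module, of how a diff in that shape is produced:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"k8s.gcr.io/kube-apiserver:v1.20.0",
			"k8s.gcr.io/pause:3.2",
		}
		got := []string{} // empty: the VM never came up, so no images were listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}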

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-207000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-207000 --alsologtostderr -v=1: exit status 83 (41.8295ms)

-- stdout --
	* The control-plane node old-k8s-version-207000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-207000"

-- /stdout --
** stderr ** 
	I0805 04:47:50.614359   11160 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:47:50.615639   11160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:47:50.615644   11160 out.go:304] Setting ErrFile to fd 2...
	I0805 04:47:50.615646   11160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:47:50.615812   11160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:47:50.616037   11160 out.go:298] Setting JSON to false
	I0805 04:47:50.616043   11160 mustload.go:65] Loading cluster: old-k8s-version-207000
	I0805 04:47:50.616242   11160 config.go:182] Loaded profile config "old-k8s-version-207000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0805 04:47:50.620592   11160 out.go:177] * The control-plane node old-k8s-version-207000 host is not running: state=Stopped
	I0805 04:47:50.624504   11160 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-207000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-207000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000: exit status 7 (29.457084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-207000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000: exit status 7 (29.120917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-207000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-049000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-049000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.777176458s)

-- stdout --
	* [no-preload-049000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-049000" primary control-plane node in "no-preload-049000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-049000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:47:50.929665   11177 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:47:50.929795   11177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:47:50.929800   11177 out.go:304] Setting ErrFile to fd 2...
	I0805 04:47:50.929802   11177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:47:50.929928   11177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:47:50.931107   11177 out.go:298] Setting JSON to false
	I0805 04:47:50.947917   11177 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6440,"bootTime":1722852030,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:47:50.947992   11177 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:47:50.951085   11177 out.go:177] * [no-preload-049000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:47:50.956077   11177 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:47:50.956130   11177 notify.go:220] Checking for updates...
	I0805 04:47:50.963941   11177 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:47:50.967961   11177 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:47:50.971019   11177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:47:50.974015   11177 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:47:50.977009   11177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:47:50.980234   11177 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:47:50.980292   11177 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:47:50.980339   11177 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:47:50.983936   11177 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:47:50.991020   11177 start.go:297] selected driver: qemu2
	I0805 04:47:50.991029   11177 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:47:50.991037   11177 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:47:50.993242   11177 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:47:50.996951   11177 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:47:51.000113   11177 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:47:51.000129   11177 cni.go:84] Creating CNI manager for ""
	I0805 04:47:51.000135   11177 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:47:51.000142   11177 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:47:51.000176   11177 start.go:340] cluster config:
	{Name:no-preload-049000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-049000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/
bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:47:51.003712   11177 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:51.010911   11177 out.go:177] * Starting "no-preload-049000" primary control-plane node in "no-preload-049000" cluster
	I0805 04:47:51.014958   11177 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 04:47:51.015037   11177 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/no-preload-049000/config.json ...
	I0805 04:47:51.015036   11177 cache.go:107] acquiring lock: {Name:mk0a7819add7465fad2fd0a86cd140be57dd6847 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:51.015052   11177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/no-preload-049000/config.json: {Name:mkb5acb21c8caaca44e104ec262da70e4a3d30f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:47:51.015044   11177 cache.go:107] acquiring lock: {Name:mk26a4ffa213c9bc5d6aceb63f9f49178638d1c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:51.015065   11177 cache.go:107] acquiring lock: {Name:mk0d592030d0daa96fdd2fe53099ca9e5851ce1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:51.015133   11177 cache.go:107] acquiring lock: {Name:mkbada51e6ab5d4a867b1c79092e40ba02f3684b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:51.015205   11177 cache.go:107] acquiring lock: {Name:mk2bcb2282dea989b92e823eeb8b0974570d0dc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:51.015280   11177 cache.go:107] acquiring lock: {Name:mk900607b4bae6d4fde5a7c4c3d421c37c41f9f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:51.015304   11177 start.go:360] acquireMachinesLock for no-preload-049000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:47:51.015301   11177 cache.go:115] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0805 04:47:51.015338   11177 start.go:364] duration metric: took 28µs to acquireMachinesLock for "no-preload-049000"
	I0805 04:47:51.015328   11177 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 280.041µs
	I0805 04:47:51.015329   11177 cache.go:107] acquiring lock: {Name:mk5d4a2a33b18622a4b4233914932ff66a518017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:51.015347   11177 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 04:47:51.015196   11177 cache.go:107] acquiring lock: {Name:mk295f64db8695df5ab237a1fa1461cfd5c1514e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:47:51.015349   11177 start.go:93] Provisioning new machine with config: &{Name:no-preload-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-049000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:47:51.015398   11177 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:47:51.015333   11177 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 04:47:51.015336   11177 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0805 04:47:51.015569   11177 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 04:47:51.015388   11177 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0805 04:47:51.015581   11177 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0805 04:47:51.015883   11177 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0805 04:47:51.019356   11177 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 04:47:51.022018   11177 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:47:51.025200   11177 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0805 04:47:51.025209   11177 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 04:47:51.025274   11177 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0805 04:47:51.025299   11177 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 04:47:51.025377   11177 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0805 04:47:51.025504   11177 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 04:47:51.025487   11177 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 04:47:51.038414   11177 start.go:159] libmachine.API.Create for "no-preload-049000" (driver="qemu2")
	I0805 04:47:51.038455   11177 client.go:168] LocalClient.Create starting
	I0805 04:47:51.038562   11177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:47:51.038596   11177 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:51.038609   11177 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:51.038647   11177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:47:51.038670   11177 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:51.038678   11177 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:51.039046   11177 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:47:51.192556   11177 main.go:141] libmachine: Creating SSH key...
	I0805 04:47:51.253920   11177 main.go:141] libmachine: Creating Disk image...
	I0805 04:47:51.253938   11177 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:47:51.254150   11177 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2
	I0805 04:47:51.264122   11177 main.go:141] libmachine: STDOUT: 
	I0805 04:47:51.264146   11177 main.go:141] libmachine: STDERR: 
	I0805 04:47:51.264196   11177 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2 +20000M
	I0805 04:47:51.273135   11177 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:47:51.273164   11177 main.go:141] libmachine: STDERR: 
	I0805 04:47:51.273176   11177 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2
	I0805 04:47:51.273181   11177 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:47:51.273193   11177 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:47:51.273221   11177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:02:1c:6b:ca:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2
	I0805 04:47:51.275199   11177 main.go:141] libmachine: STDOUT: 
	I0805 04:47:51.275220   11177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:47:51.275240   11177 client.go:171] duration metric: took 236.771542ms to LocalClient.Create
	I0805 04:47:51.416279   11177 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0805 04:47:51.418979   11177 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0805 04:47:51.424214   11177 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0805 04:47:51.437795   11177 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0805 04:47:51.468663   11177 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0805 04:47:51.470167   11177 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0805 04:47:51.514294   11177 cache.go:162] opening:  /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0805 04:47:51.636233   11177 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0805 04:47:51.636255   11177 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 621.042333ms
	I0805 04:47:51.636266   11177 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0805 04:47:53.275387   11177 start.go:128] duration metric: took 2.259952125s to createHost
	I0805 04:47:53.275420   11177 start.go:83] releasing machines lock for "no-preload-049000", held for 2.260055125s
	W0805 04:47:53.275460   11177 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:53.279641   11177 out.go:177] * Deleting "no-preload-049000" in qemu2 ...
	W0805 04:47:53.295510   11177 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:47:53.295522   11177 start.go:729] Will try again in 5 seconds ...
	I0805 04:47:54.119041   11177 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0805 04:47:54.119068   11177 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.103788958s
	I0805 04:47:54.119080   11177 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0805 04:47:54.250976   11177 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0805 04:47:54.251007   11177 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 3.235939791s
	I0805 04:47:54.251039   11177 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0805 04:47:54.688229   11177 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0805 04:47:54.688243   11177 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 3.673056083s
	I0805 04:47:54.688250   11177 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0805 04:47:54.881221   11177 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0805 04:47:54.881239   11177 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 3.866075458s
	I0805 04:47:54.881247   11177 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0805 04:47:55.468395   11177 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0805 04:47:55.468412   11177 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 4.453304083s
	I0805 04:47:55.468421   11177 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0805 04:47:58.296143   11177 start.go:360] acquireMachinesLock for no-preload-049000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:47:58.296695   11177 start.go:364] duration metric: took 457.875µs to acquireMachinesLock for "no-preload-049000"
	I0805 04:47:58.296842   11177 start.go:93] Provisioning new machine with config: &{Name:no-preload-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-049000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:47:58.297093   11177 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:47:58.306588   11177 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:47:58.355809   11177 start.go:159] libmachine.API.Create for "no-preload-049000" (driver="qemu2")
	I0805 04:47:58.355860   11177 client.go:168] LocalClient.Create starting
	I0805 04:47:58.355973   11177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:47:58.356045   11177 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:58.356064   11177 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:58.356136   11177 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:47:58.356185   11177 main.go:141] libmachine: Decoding PEM data...
	I0805 04:47:58.356202   11177 main.go:141] libmachine: Parsing certificate...
	I0805 04:47:58.356732   11177 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:47:58.512730   11177 main.go:141] libmachine: Creating SSH key...
	I0805 04:47:58.619291   11177 main.go:141] libmachine: Creating Disk image...
	I0805 04:47:58.619298   11177 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:47:58.619496   11177 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2
	I0805 04:47:58.629155   11177 main.go:141] libmachine: STDOUT: 
	I0805 04:47:58.629176   11177 main.go:141] libmachine: STDERR: 
	I0805 04:47:58.629232   11177 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2 +20000M
	I0805 04:47:58.637442   11177 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:47:58.637458   11177 main.go:141] libmachine: STDERR: 
	I0805 04:47:58.637478   11177 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2
	I0805 04:47:58.637483   11177 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:47:58.637497   11177 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:47:58.637533   11177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:88:a8:c8:6c:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2
	I0805 04:47:58.639448   11177 main.go:141] libmachine: STDOUT: 
	I0805 04:47:58.639462   11177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:47:58.639475   11177 client.go:171] duration metric: took 283.608208ms to LocalClient.Create
	I0805 04:47:59.100579   11177 cache.go:157] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0805 04:47:59.100595   11177 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 8.085253166s
	I0805 04:47:59.100602   11177 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0805 04:47:59.100626   11177 cache.go:87] Successfully saved all images to host disk.
	I0805 04:48:00.642000   11177 start.go:128] duration metric: took 2.344805875s to createHost
	I0805 04:48:00.642094   11177 start.go:83] releasing machines lock for "no-preload-049000", held for 2.345351792s
	W0805 04:48:00.642400   11177 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-049000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-049000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:00.651976   11177 out.go:177] 
	W0805 04:48:00.656050   11177 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:48:00.656069   11177 out.go:239] * 
	* 
	W0805 04:48:00.657850   11177 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:48:00.666056   11177 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-049000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (53.903916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.83s)
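Every failure in this group traces to one condition: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never handed a network file descriptor and host creation aborts before Kubernetes is involved. A minimal preflight sketch in Go — not minikube code; the socket path is simply the SocketVMnetPath value from the config dump above — that reproduces the failing check:

	// preflight_socket_vmnet.go — minimal sketch: dial the socket_vmnet
	// unix socket before attempting a qemu2 start. Illustrative only.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the condition behind every "Connection refused" above:
			// nothing is listening on the socket (daemon not running).
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this dial fails on the CI host, every qemu2 start in this report fails the same way, independent of the Kubernetes version or profile under test.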

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-049000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-049000 create -f testdata/busybox.yaml: exit status 1 (29.613541ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-049000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-049000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (29.066833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (29.645667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
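DeployApp — and the addon and dashboard checks that follow — are cascade failures: because FirstStart never created the VM, no "no-preload-049000" context was written to the kubeconfig, so every kubectl --context invocation exits 1 immediately. A short illustrative sketch (assuming k8s.io/client-go; this is not the harness's actual logic) of checking for the context up front:

	// context_check.go — sketch: verify a kubeconfig context exists before
	// issuing kubectl commands against it. Assumes k8s.io/client-go.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Loads from $KUBECONFIG (here .../19377-7130/kubeconfig) or the default path.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts["no-preload-049000"]; !ok {
			// Matches the kubectl error above.
			fmt.Fprintln(os.Stderr, `context "no-preload-049000" does not exist`)
			os.Exit(1)
		}
		fmt.Println("context present")
	}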

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-049000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-049000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-049000 describe deploy/metrics-server -n kube-system: exit status 1 (27.426958ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-049000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-049000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (28.429333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-049000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-049000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.174355542s)

                                                
                                                
-- stdout --
	* [no-preload-049000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-049000" primary control-plane node in "no-preload-049000" cluster
	* Restarting existing qemu2 VM for "no-preload-049000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-049000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:48:02.834719   11251 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:48:02.834866   11251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:02.834869   11251 out.go:304] Setting ErrFile to fd 2...
	I0805 04:48:02.834871   11251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:02.835014   11251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:48:02.836057   11251 out.go:298] Setting JSON to false
	I0805 04:48:02.852442   11251 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6452,"bootTime":1722852030,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:48:02.852522   11251 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:48:02.857726   11251 out.go:177] * [no-preload-049000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:48:02.864752   11251 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:48:02.864814   11251 notify.go:220] Checking for updates...
	I0805 04:48:02.871671   11251 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:48:02.874714   11251 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:48:02.877731   11251 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:48:02.880758   11251 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:48:02.883723   11251 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:48:02.886905   11251 config.go:182] Loaded profile config "no-preload-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 04:48:02.887170   11251 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:48:02.891695   11251 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:48:02.897725   11251 start.go:297] selected driver: qemu2
	I0805 04:48:02.897732   11251 start.go:901] validating driver "qemu2" against &{Name:no-preload-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-049000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:48:02.897793   11251 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:48:02.900378   11251 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:48:02.900415   11251 cni.go:84] Creating CNI manager for ""
	I0805 04:48:02.900424   11251 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:48:02.900444   11251 start.go:340] cluster config:
	{Name:no-preload-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-049000 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host M
ount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:48:02.904107   11251 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:02.911691   11251 out.go:177] * Starting "no-preload-049000" primary control-plane node in "no-preload-049000" cluster
	I0805 04:48:02.915772   11251 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 04:48:02.915853   11251 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/no-preload-049000/config.json ...
	I0805 04:48:02.915891   11251 cache.go:107] acquiring lock: {Name:mk0a7819add7465fad2fd0a86cd140be57dd6847 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:02.915902   11251 cache.go:107] acquiring lock: {Name:mk295f64db8695df5ab237a1fa1461cfd5c1514e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:02.915923   11251 cache.go:107] acquiring lock: {Name:mk0d592030d0daa96fdd2fe53099ca9e5851ce1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:02.915954   11251 cache.go:115] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0805 04:48:02.915960   11251 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.875µs
	I0805 04:48:02.915963   11251 cache.go:115] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0805 04:48:02.915972   11251 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0805 04:48:02.915972   11251 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 83.125µs
	I0805 04:48:02.915989   11251 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0805 04:48:02.915992   11251 cache.go:115] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0805 04:48:02.916000   11251 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 91.166µs
	I0805 04:48:02.916004   11251 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0805 04:48:02.915998   11251 cache.go:107] acquiring lock: {Name:mk5d4a2a33b18622a4b4233914932ff66a518017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:02.916012   11251 cache.go:107] acquiring lock: {Name:mk26a4ffa213c9bc5d6aceb63f9f49178638d1c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:02.915981   11251 cache.go:107] acquiring lock: {Name:mkbada51e6ab5d4a867b1c79092e40ba02f3684b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:02.916038   11251 cache.go:115] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0805 04:48:02.916041   11251 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 44.5µs
	I0805 04:48:02.916044   11251 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0805 04:48:02.916048   11251 cache.go:115] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0805 04:48:02.916005   11251 cache.go:107] acquiring lock: {Name:mk900607b4bae6d4fde5a7c4c3d421c37c41f9f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:02.916067   11251 cache.go:115] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0805 04:48:02.916075   11251 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 94.542µs
	I0805 04:48:02.916078   11251 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0805 04:48:02.916078   11251 cache.go:107] acquiring lock: {Name:mk2bcb2282dea989b92e823eeb8b0974570d0dc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:02.916051   11251 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 39.75µs
	I0805 04:48:02.916092   11251 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0805 04:48:02.916123   11251 cache.go:115] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0805 04:48:02.916127   11251 cache.go:115] /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0805 04:48:02.916128   11251 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 122.792µs
	I0805 04:48:02.916134   11251 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0805 04:48:02.916131   11251 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 58.583µs
	I0805 04:48:02.916140   11251 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0805 04:48:02.916144   11251 cache.go:87] Successfully saved all images to host disk.
	I0805 04:48:02.916260   11251 start.go:360] acquireMachinesLock for no-preload-049000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:02.916286   11251 start.go:364] duration metric: took 20.625µs to acquireMachinesLock for "no-preload-049000"
	I0805 04:48:02.916294   11251 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:48:02.916300   11251 fix.go:54] fixHost starting: 
	I0805 04:48:02.916405   11251 fix.go:112] recreateIfNeeded on no-preload-049000: state=Stopped err=<nil>
	W0805 04:48:02.916413   11251 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:48:02.924686   11251 out.go:177] * Restarting existing qemu2 VM for "no-preload-049000" ...
	I0805 04:48:02.928677   11251 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:02.928713   11251 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:88:a8:c8:6c:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2
	I0805 04:48:02.930649   11251 main.go:141] libmachine: STDOUT: 
	I0805 04:48:02.930667   11251 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:02.930693   11251 fix.go:56] duration metric: took 14.3935ms for fixHost
	I0805 04:48:02.930698   11251 start.go:83] releasing machines lock for "no-preload-049000", held for 14.407917ms
	W0805 04:48:02.930704   11251 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:48:02.930737   11251 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:02.930741   11251 start.go:729] Will try again in 5 seconds ...
	I0805 04:48:07.932266   11251 start.go:360] acquireMachinesLock for no-preload-049000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:07.932511   11251 start.go:364] duration metric: took 198.666µs to acquireMachinesLock for "no-preload-049000"
	I0805 04:48:07.932582   11251 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:48:07.932598   11251 fix.go:54] fixHost starting: 
	I0805 04:48:07.932973   11251 fix.go:112] recreateIfNeeded on no-preload-049000: state=Stopped err=<nil>
	W0805 04:48:07.932986   11251 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:48:07.937670   11251 out.go:177] * Restarting existing qemu2 VM for "no-preload-049000" ...
	I0805 04:48:07.944614   11251 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:07.944740   11251 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:88:a8:c8:6c:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/no-preload-049000/disk.qcow2
	I0805 04:48:07.949924   11251 main.go:141] libmachine: STDOUT: 
	I0805 04:48:07.949960   11251 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:07.949995   11251 fix.go:56] duration metric: took 17.397417ms for fixHost
	I0805 04:48:07.950006   11251 start.go:83] releasing machines lock for "no-preload-049000", held for 17.480583ms
	W0805 04:48:07.950109   11251 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-049000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-049000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:07.957625   11251 out.go:177] 
	W0805 04:48:07.960668   11251 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:48:07.960690   11251 out.go:239] * 
	* 
	W0805 04:48:07.962002   11251 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:48:07.975663   11251 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-049000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (53.411292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.23s)
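SecondStart repeats the FirstStart shape: one failed attempt, a logged "Will try again in 5 seconds ..." (start.go:729), a fixed sleep, then a second identical failure and exit status 80. A minimal sketch of that retry shape, with startHost as a hypothetical stand-in for the real driver call — illustrative, not minikube's implementation:

	// retry_sketch.go — sketch of the one-retry-after-5s pattern visible
	// in the start.go log lines above. startHost is a stand-in.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		// Stand-in: in these runs this is where the qemu launch fails.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}

Since the socket_vmnet daemon is still down five seconds later, the retry can never succeed, which is why both attempts log the identical error.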

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-049000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (31.178667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-049000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-049000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-049000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.770958ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-049000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-049000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (29.333125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-049000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (30.587333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
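The "(-want +got)" block above is go-cmp diff output: the test wants the eight v1.31.0-rc.0 images and got an empty list, since `image list` had no running VM to query. A sketch of how such a diff is produced, assuming github.com/google/go-cmp:

	// image_diff.go — sketch reproducing the "(-want +got)" diff above.
	// got is empty because the host is Stopped.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.10.1",
			"registry.k8s.io/etcd:3.5.7-0",
			"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
			"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
			"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
			"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
			"registry.k8s.io/pause:3.9",
		}
		var got []string // `image list` returned nothing
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.0-rc.0 images missing (-want +got):\n%s", diff)
		}
	}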

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-049000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-049000 --alsologtostderr -v=1: exit status 83 (41.152542ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-049000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-049000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:48:08.226782   11270 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:48:08.226965   11270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:08.226968   11270 out.go:304] Setting ErrFile to fd 2...
	I0805 04:48:08.226974   11270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:08.227107   11270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:48:08.227343   11270 out.go:298] Setting JSON to false
	I0805 04:48:08.227349   11270 mustload.go:65] Loading cluster: no-preload-049000
	I0805 04:48:08.227516   11270 config.go:182] Loaded profile config "no-preload-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 04:48:08.232441   11270 out.go:177] * The control-plane node no-preload-049000 host is not running: state=Stopped
	I0805 04:48:08.236381   11270 out.go:177]   To start a cluster, run: "minikube start -p no-preload-049000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-049000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (28.225917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (28.292917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
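Pause exits with status 83 because the control-plane host is Stopped, and each post-mortem `status` probe exits 7, which helpers_test.go explicitly treats as "may be ok". A sketch of reading that exit code the way the post-mortem does, using os/exec — illustrative only:

	// status_exit.go — sketch: run `minikube status` and inspect its exit
	// code, as the post-mortem helpers above do. Exit 7 corresponds to the
	// Stopped host state and is informational rather than fatal.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "no-preload-049000")
		out, err := cmd.CombinedOutput()
		if ee, ok := err.(*exec.ExitError); ok {
			// helpers_test.go logs "status error: exit status 7 (may be ok)"
			// for this case and skips log retrieval.
			fmt.Printf("status=%q exit=%d\n", out, ee.ExitCode())
			return
		}
		if err != nil {
			fmt.Println("could not run status:", err)
			return
		}
		fmt.Printf("status=%q exit=0\n", out)
	}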

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (11.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-407000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-407000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (11.831773041s)

                                                
                                                
-- stdout --
	* [embed-certs-407000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-407000" primary control-plane node in "embed-certs-407000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-407000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:48:08.541223   11287 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:48:08.541355   11287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:08.541358   11287 out.go:304] Setting ErrFile to fd 2...
	I0805 04:48:08.541360   11287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:08.541495   11287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:48:08.542568   11287 out.go:298] Setting JSON to false
	I0805 04:48:08.559201   11287 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6458,"bootTime":1722852030,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:48:08.559276   11287 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:48:08.564630   11287 out.go:177] * [embed-certs-407000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:48:08.570650   11287 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:48:08.570730   11287 notify.go:220] Checking for updates...
	I0805 04:48:08.576519   11287 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:48:08.579551   11287 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:48:08.582598   11287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:48:08.584121   11287 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:48:08.587539   11287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:48:08.590870   11287 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:48:08.590927   11287 config.go:182] Loaded profile config "stopped-upgrade-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 04:48:08.590985   11287 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:48:08.595349   11287 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:48:08.602560   11287 start.go:297] selected driver: qemu2
	I0805 04:48:08.602565   11287 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:48:08.602571   11287 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:48:08.604923   11287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:48:08.608532   11287 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:48:08.611617   11287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:48:08.611646   11287 cni.go:84] Creating CNI manager for ""
	I0805 04:48:08.611652   11287 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:48:08.611657   11287 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:48:08.611679   11287 start.go:340] cluster config:
	{Name:embed-certs-407000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:48:08.615240   11287 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:08.618570   11287 out.go:177] * Starting "embed-certs-407000" primary control-plane node in "embed-certs-407000" cluster
	I0805 04:48:08.626539   11287 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:48:08.626556   11287 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:48:08.626568   11287 cache.go:56] Caching tarball of preloaded images
	I0805 04:48:08.626631   11287 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:48:08.626638   11287 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:48:08.626697   11287 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/embed-certs-407000/config.json ...
	I0805 04:48:08.626708   11287 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/embed-certs-407000/config.json: {Name:mkc9ad610d93be4728d2d09462c5e60e36eef011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:48:08.626920   11287 start.go:360] acquireMachinesLock for embed-certs-407000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:08.626949   11287 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "embed-certs-407000"
	I0805 04:48:08.626959   11287 start.go:93] Provisioning new machine with config: &{Name:embed-certs-407000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:48:08.626995   11287 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:48:08.635555   11287 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:48:08.650818   11287 start.go:159] libmachine.API.Create for "embed-certs-407000" (driver="qemu2")
	I0805 04:48:08.650842   11287 client.go:168] LocalClient.Create starting
	I0805 04:48:08.650899   11287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:48:08.650931   11287 main.go:141] libmachine: Decoding PEM data...
	I0805 04:48:08.650941   11287 main.go:141] libmachine: Parsing certificate...
	I0805 04:48:08.650981   11287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:48:08.651003   11287 main.go:141] libmachine: Decoding PEM data...
	I0805 04:48:08.651018   11287 main.go:141] libmachine: Parsing certificate...
	I0805 04:48:08.651450   11287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:48:08.798738   11287 main.go:141] libmachine: Creating SSH key...
	I0805 04:48:08.851985   11287 main.go:141] libmachine: Creating Disk image...
	I0805 04:48:08.851990   11287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:48:08.852163   11287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2
	I0805 04:48:08.861325   11287 main.go:141] libmachine: STDOUT: 
	I0805 04:48:08.861345   11287 main.go:141] libmachine: STDERR: 
	I0805 04:48:08.861396   11287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2 +20000M
	I0805 04:48:08.869211   11287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:48:08.869224   11287 main.go:141] libmachine: STDERR: 
	I0805 04:48:08.869243   11287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2
	I0805 04:48:08.869248   11287 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:48:08.869260   11287 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:08.869293   11287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:48:0f:ed:e3:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2
	I0805 04:48:08.870828   11287 main.go:141] libmachine: STDOUT: 
	I0805 04:48:08.870842   11287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:08.870860   11287 client.go:171] duration metric: took 220.012708ms to LocalClient.Create
	I0805 04:48:10.873092   11287 start.go:128] duration metric: took 2.246041792s to createHost
	I0805 04:48:10.873168   11287 start.go:83] releasing machines lock for "embed-certs-407000", held for 2.246188167s
	W0805 04:48:10.873269   11287 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:10.880615   11287 out.go:177] * Deleting "embed-certs-407000" in qemu2 ...
	W0805 04:48:10.906593   11287 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:10.906629   11287 start.go:729] Will try again in 5 seconds ...
	I0805 04:48:15.908755   11287 start.go:360] acquireMachinesLock for embed-certs-407000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:17.926955   11287 start.go:364] duration metric: took 2.018113875s to acquireMachinesLock for "embed-certs-407000"
	I0805 04:48:17.927108   11287 start.go:93] Provisioning new machine with config: &{Name:embed-certs-407000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:48:17.927388   11287 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:48:17.935867   11287 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:48:17.985481   11287 start.go:159] libmachine.API.Create for "embed-certs-407000" (driver="qemu2")
	I0805 04:48:17.985525   11287 client.go:168] LocalClient.Create starting
	I0805 04:48:17.985644   11287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:48:17.985703   11287 main.go:141] libmachine: Decoding PEM data...
	I0805 04:48:17.985720   11287 main.go:141] libmachine: Parsing certificate...
	I0805 04:48:17.985782   11287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:48:17.985824   11287 main.go:141] libmachine: Decoding PEM data...
	I0805 04:48:17.985835   11287 main.go:141] libmachine: Parsing certificate...
	I0805 04:48:17.986435   11287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:48:18.153863   11287 main.go:141] libmachine: Creating SSH key...
	I0805 04:48:18.268213   11287 main.go:141] libmachine: Creating Disk image...
	I0805 04:48:18.268218   11287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:48:18.268404   11287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2
	I0805 04:48:18.277852   11287 main.go:141] libmachine: STDOUT: 
	I0805 04:48:18.277869   11287 main.go:141] libmachine: STDERR: 
	I0805 04:48:18.277925   11287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2 +20000M
	I0805 04:48:18.285887   11287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:48:18.285902   11287 main.go:141] libmachine: STDERR: 
	I0805 04:48:18.285911   11287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2
	I0805 04:48:18.285914   11287 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:48:18.285925   11287 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:18.285963   11287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:15:27:2e:07:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2
	I0805 04:48:18.287653   11287 main.go:141] libmachine: STDOUT: 
	I0805 04:48:18.287668   11287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:18.287679   11287 client.go:171] duration metric: took 302.144959ms to LocalClient.Create
	I0805 04:48:20.290041   11287 start.go:128] duration metric: took 2.36255775s to createHost
	I0805 04:48:20.290162   11287 start.go:83] releasing machines lock for "embed-certs-407000", held for 2.363110709s
	W0805 04:48:20.290479   11287 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-407000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-407000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:20.307099   11287 out.go:177] 
	W0805 04:48:20.317108   11287 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:48:20.317134   11287 out.go:239] * 
	* 
	W0805 04:48:20.319766   11287 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:48:20.330000   11287 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-407000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000: exit status 7 (65.076667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.90s)
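
Every failure in this group traces back to the same first step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no QEMU VM is ever started and each later assertion runs against a stopped host. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew at the /opt/socket_vmnet paths shown in the log (these commands are illustrative and not part of the captured output):

    # Is the socket present, and is the daemon registered with launchd?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet

    # If the service is down, restarting the Homebrew-managed daemon may clear it
    sudo brew services restart socket_vmnet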

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-780000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-780000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.997343875s)

-- stdout --
	* [default-k8s-diff-port-780000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-780000" primary control-plane node in "default-k8s-diff-port-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:48:15.484414   11307 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:48:15.484532   11307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:15.484535   11307 out.go:304] Setting ErrFile to fd 2...
	I0805 04:48:15.484538   11307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:15.484654   11307 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:48:15.485722   11307 out.go:298] Setting JSON to false
	I0805 04:48:15.501930   11307 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6465,"bootTime":1722852030,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:48:15.502007   11307 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:48:15.507041   11307 out.go:177] * [default-k8s-diff-port-780000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:48:15.516054   11307 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:48:15.516112   11307 notify.go:220] Checking for updates...
	I0805 04:48:15.523002   11307 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:48:15.526012   11307 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:48:15.528989   11307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:48:15.531967   11307 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:48:15.535064   11307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:48:15.538288   11307 config.go:182] Loaded profile config "embed-certs-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:48:15.538349   11307 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:48:15.538397   11307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:48:15.541933   11307 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:48:15.547906   11307 start.go:297] selected driver: qemu2
	I0805 04:48:15.547914   11307 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:48:15.547922   11307 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:48:15.550233   11307 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:48:15.552989   11307 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:48:15.556016   11307 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:48:15.556043   11307 cni.go:84] Creating CNI manager for ""
	I0805 04:48:15.556050   11307 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:48:15.556054   11307 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:48:15.556084   11307 start.go:340] cluster config:
	{Name:default-k8s-diff-port-780000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:48:15.559683   11307 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:15.567010   11307 out.go:177] * Starting "default-k8s-diff-port-780000" primary control-plane node in "default-k8s-diff-port-780000" cluster
	I0805 04:48:15.571020   11307 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:48:15.571034   11307 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:48:15.571049   11307 cache.go:56] Caching tarball of preloaded images
	I0805 04:48:15.571115   11307 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:48:15.571121   11307 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:48:15.571214   11307 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/default-k8s-diff-port-780000/config.json ...
	I0805 04:48:15.571225   11307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/default-k8s-diff-port-780000/config.json: {Name:mkb0ed90efc9196575243b84247c581993c52039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:48:15.571655   11307 start.go:360] acquireMachinesLock for default-k8s-diff-port-780000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:15.571690   11307 start.go:364] duration metric: took 29.042µs to acquireMachinesLock for "default-k8s-diff-port-780000"
	I0805 04:48:15.571701   11307 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:48:15.571740   11307 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:48:15.580029   11307 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:48:15.597018   11307 start.go:159] libmachine.API.Create for "default-k8s-diff-port-780000" (driver="qemu2")
	I0805 04:48:15.597045   11307 client.go:168] LocalClient.Create starting
	I0805 04:48:15.597099   11307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:48:15.597136   11307 main.go:141] libmachine: Decoding PEM data...
	I0805 04:48:15.597147   11307 main.go:141] libmachine: Parsing certificate...
	I0805 04:48:15.597184   11307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:48:15.597207   11307 main.go:141] libmachine: Decoding PEM data...
	I0805 04:48:15.597213   11307 main.go:141] libmachine: Parsing certificate...
	I0805 04:48:15.597628   11307 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:48:15.775386   11307 main.go:141] libmachine: Creating SSH key...
	I0805 04:48:15.905031   11307 main.go:141] libmachine: Creating Disk image...
	I0805 04:48:15.905037   11307 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:48:15.905215   11307 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2
	I0805 04:48:15.914594   11307 main.go:141] libmachine: STDOUT: 
	I0805 04:48:15.914611   11307 main.go:141] libmachine: STDERR: 
	I0805 04:48:15.914663   11307 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2 +20000M
	I0805 04:48:15.922502   11307 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:48:15.922519   11307 main.go:141] libmachine: STDERR: 
	I0805 04:48:15.922530   11307 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2
	I0805 04:48:15.922534   11307 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:48:15.922547   11307 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:15.922574   11307 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:54:c1:6d:dd:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2
	I0805 04:48:15.924317   11307 main.go:141] libmachine: STDOUT: 
	I0805 04:48:15.924333   11307 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:15.924352   11307 client.go:171] duration metric: took 327.297167ms to LocalClient.Create
	I0805 04:48:17.926642   11307 start.go:128] duration metric: took 2.354856083s to createHost
	I0805 04:48:17.926746   11307 start.go:83] releasing machines lock for "default-k8s-diff-port-780000", held for 2.355022584s
	W0805 04:48:17.926824   11307 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:17.947803   11307 out.go:177] * Deleting "default-k8s-diff-port-780000" in qemu2 ...
	W0805 04:48:17.967840   11307 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:17.967862   11307 start.go:729] Will try again in 5 seconds ...
	I0805 04:48:22.970069   11307 start.go:360] acquireMachinesLock for default-k8s-diff-port-780000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:22.970404   11307 start.go:364] duration metric: took 266.125µs to acquireMachinesLock for "default-k8s-diff-port-780000"
	I0805 04:48:22.970467   11307 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:48:22.970736   11307 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:48:22.980124   11307 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:48:23.029513   11307 start.go:159] libmachine.API.Create for "default-k8s-diff-port-780000" (driver="qemu2")
	I0805 04:48:23.029560   11307 client.go:168] LocalClient.Create starting
	I0805 04:48:23.029674   11307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:48:23.029727   11307 main.go:141] libmachine: Decoding PEM data...
	I0805 04:48:23.029743   11307 main.go:141] libmachine: Parsing certificate...
	I0805 04:48:23.029803   11307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:48:23.029833   11307 main.go:141] libmachine: Decoding PEM data...
	I0805 04:48:23.029847   11307 main.go:141] libmachine: Parsing certificate...
	I0805 04:48:23.030532   11307 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:48:23.193104   11307 main.go:141] libmachine: Creating SSH key...
	I0805 04:48:23.388461   11307 main.go:141] libmachine: Creating Disk image...
	I0805 04:48:23.388467   11307 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:48:23.388687   11307 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2
	I0805 04:48:23.398583   11307 main.go:141] libmachine: STDOUT: 
	I0805 04:48:23.398613   11307 main.go:141] libmachine: STDERR: 
	I0805 04:48:23.398668   11307 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2 +20000M
	I0805 04:48:23.406736   11307 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:48:23.406751   11307 main.go:141] libmachine: STDERR: 
	I0805 04:48:23.406762   11307 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2
	I0805 04:48:23.406775   11307 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:48:23.406785   11307 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:23.406819   11307 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:ed:c5:fd:b2:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2
	I0805 04:48:23.408525   11307 main.go:141] libmachine: STDOUT: 
	I0805 04:48:23.408539   11307 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:23.408558   11307 client.go:171] duration metric: took 378.982417ms to LocalClient.Create
	I0805 04:48:25.410762   11307 start.go:128] duration metric: took 2.4399695s to createHost
	I0805 04:48:25.410808   11307 start.go:83] releasing machines lock for "default-k8s-diff-port-780000", held for 2.440354917s
	W0805 04:48:25.411264   11307 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:25.424712   11307 out.go:177] 
	W0805 04:48:25.428964   11307 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:48:25.429018   11307 out.go:239] * 
	* 
	W0805 04:48:25.431692   11307 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:48:25.439771   11307 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-780000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000: exit status 7 (65.698333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.07s)
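
The identical "Connection refused" appears here before kubeadm ever runs, which implicates the host networking helper rather than the Kubernetes version or the non-default API server port (8444). One way to take minikube out of the picture is to invoke the client directly; a sketch, assuming socket_vmnet_client connects to the socket and then executes the trailing command, as the QEMU invocations in this log do:

    # A healthy setup runs the command after connecting and prints "ok";
    # a broken one repeats the exact error captured above.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok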

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-407000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-407000 create -f testdata/busybox.yaml: exit status 1 (29.760333ms)

** stderr ** 
	error: context "embed-certs-407000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-407000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000: exit status 7 (28.155958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-407000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000: exit status 7 (27.879417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
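
This failure is a cascade rather than a separate defect: FirstStart never created the cluster, so kubectl has no "embed-certs-407000" context to target. A quick confirmation sketch (illustrative commands, not from the captured run):

    # Neither listing should contain an embed-certs-407000 entry
    kubectl config get-contexts
    out/minikube-darwin-arm64 profile list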

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-407000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-407000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-407000 describe deploy/metrics-server -n kube-system: exit status 1 (26.415792ms)

** stderr ** 
	error: context "embed-certs-407000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-407000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000: exit status 7 (28.229542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
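
Note that the "addons enable" step itself appears to succeed (no non-zero exit is logged), presumably because it only updates the stored profile, while the follow-up kubectl describe fails for the same missing-context reason as above. The recorded addon state can still be inspected without a running host; a sketch using the standard addons subcommand:

    # Lists addon status as recorded in the embed-certs-407000 profile
    out/minikube-darwin-arm64 addons list -p embed-certs-407000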

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-407000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-407000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.190126291s)

-- stdout --
	* [embed-certs-407000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-407000" primary control-plane node in "embed-certs-407000" cluster
	* Restarting existing qemu2 VM for "embed-certs-407000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-407000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:48:22.679016   11359 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:48:22.679140   11359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:22.679143   11359 out.go:304] Setting ErrFile to fd 2...
	I0805 04:48:22.679145   11359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:22.679289   11359 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:48:22.680310   11359 out.go:298] Setting JSON to false
	I0805 04:48:22.696327   11359 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6472,"bootTime":1722852030,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:48:22.696433   11359 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:48:22.700013   11359 out.go:177] * [embed-certs-407000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:48:22.706995   11359 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:48:22.707046   11359 notify.go:220] Checking for updates...
	I0805 04:48:22.713968   11359 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:48:22.716993   11359 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:48:22.719985   11359 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:48:22.728033   11359 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:48:22.730988   11359 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:48:22.734244   11359 config.go:182] Loaded profile config "embed-certs-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:48:22.734494   11359 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:48:22.738933   11359 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:48:22.745986   11359 start.go:297] selected driver: qemu2
	I0805 04:48:22.745991   11359 start.go:901] validating driver "qemu2" against &{Name:embed-certs-407000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:48:22.746065   11359 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:48:22.748578   11359 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:48:22.748605   11359 cni.go:84] Creating CNI manager for ""
	I0805 04:48:22.748613   11359 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:48:22.748659   11359 start.go:340] cluster config:
	{Name:embed-certs-407000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:48:22.752465   11359 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:22.759985   11359 out.go:177] * Starting "embed-certs-407000" primary control-plane node in "embed-certs-407000" cluster
	I0805 04:48:22.764012   11359 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:48:22.764029   11359 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:48:22.764049   11359 cache.go:56] Caching tarball of preloaded images
	I0805 04:48:22.764112   11359 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:48:22.764124   11359 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:48:22.764183   11359 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/embed-certs-407000/config.json ...
	I0805 04:48:22.764702   11359 start.go:360] acquireMachinesLock for embed-certs-407000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:22.764737   11359 start.go:364] duration metric: took 29.125µs to acquireMachinesLock for "embed-certs-407000"
	I0805 04:48:22.764745   11359 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:48:22.764751   11359 fix.go:54] fixHost starting: 
	I0805 04:48:22.764869   11359 fix.go:112] recreateIfNeeded on embed-certs-407000: state=Stopped err=<nil>
	W0805 04:48:22.764881   11359 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:48:22.771953   11359 out.go:177] * Restarting existing qemu2 VM for "embed-certs-407000" ...
	I0805 04:48:22.775995   11359 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:22.776045   11359 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:15:27:2e:07:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2
	I0805 04:48:22.778072   11359 main.go:141] libmachine: STDOUT: 
	I0805 04:48:22.778093   11359 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:22.778121   11359 fix.go:56] duration metric: took 13.370833ms for fixHost
	I0805 04:48:22.778125   11359 start.go:83] releasing machines lock for "embed-certs-407000", held for 13.383333ms
	W0805 04:48:22.778133   11359 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:48:22.778168   11359 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:22.778173   11359 start.go:729] Will try again in 5 seconds ...
	I0805 04:48:27.780496   11359 start.go:360] acquireMachinesLock for embed-certs-407000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:27.780956   11359 start.go:364] duration metric: took 343.333µs to acquireMachinesLock for "embed-certs-407000"
	I0805 04:48:27.781021   11359 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:48:27.781043   11359 fix.go:54] fixHost starting: 
	I0805 04:48:27.781757   11359 fix.go:112] recreateIfNeeded on embed-certs-407000: state=Stopped err=<nil>
	W0805 04:48:27.781784   11359 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:48:27.791367   11359 out.go:177] * Restarting existing qemu2 VM for "embed-certs-407000" ...
	I0805 04:48:27.794327   11359 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:27.794550   11359 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:15:27:2e:07:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/embed-certs-407000/disk.qcow2
	I0805 04:48:27.803908   11359 main.go:141] libmachine: STDOUT: 
	I0805 04:48:27.803967   11359 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:27.804062   11359 fix.go:56] duration metric: took 23.023291ms for fixHost
	I0805 04:48:27.804076   11359 start.go:83] releasing machines lock for "embed-certs-407000", held for 23.095709ms
	W0805 04:48:27.804298   11359 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-407000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-407000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:27.813335   11359 out.go:177] 
	W0805 04:48:27.817425   11359 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:48:27.817448   11359 out.go:239] * 
	* 
	W0805 04:48:27.819953   11359 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:48:27.828329   11359 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-407000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000: exit status 7 (67.437417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
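
Every error in this failure is the same proximate cause: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. A minimal triage sketch for the CI host (the launchd label and the --vmnet-gateway value are assumptions based on a default socket_vmnet install, not taken from this report):

	# Is the socket present?
	ls -l /var/run/socket_vmnet
	# Is the daemon loaded at all? (service label is install-dependent)
	sudo launchctl list | grep -i socket_vmnet
	# Run the daemon in the foreground to see why it refuses connections
	# (gateway address as in the socket_vmnet README; adjust for the local setup)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet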

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-780000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-780000 create -f testdata/busybox.yaml: exit status 1 (29.857375ms)

** stderr ** 
	error: context "default-k8s-diff-port-780000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-780000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000: exit status 7 (28.2575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-780000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000: exit status 7 (28.852417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
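
The repeated context "default-k8s-diff-port-780000" does not exist errors are downstream of the start failure: minikube only writes a kubeconfig context once the node comes up, so kubectl has nothing to target. A quick confirmation (sketch; the KUBECONFIG path is the one from the log above):

	# No context should be listed for the failed profile
	KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig kubectl config get-contexts
	# The profile itself still exists on disk even though the VM never started
	out/minikube-darwin-arm64 profile list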

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-780000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-780000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-780000 describe deploy/metrics-server -n kube-system: exit status 1 (26.570541ms)

** stderr ** 
	error: context "default-k8s-diff-port-780000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-780000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000: exit status 7 (28.674ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
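
Note that addons enable metrics-server itself exited 0 against the stopped profile; only the kubectl verification step failed. Addon state is recorded in the profile config, so it can be inspected without a running apiserver (sketch, same binary and profile as above):

	# Lists metrics-server as enabled in the profile even with the VM down
	out/minikube-darwin-arm64 addons list -p default-k8s-diff-port-780000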

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-407000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000: exit status 7 (31.439708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-407000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-407000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-407000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.630625ms)

** stderr ** 
	error: context "embed-certs-407000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-407000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000: exit status 7 (29.237834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-407000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
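
The (-want +got) output above is a diff of the expected image set for v1.30.3 against the actual image list; with the host stopped the got side is empty, so every expected image shows up as a missing (-) entry. The expected tags themselves can be spot-checked upstream by hand (hypothetical manual check, not part of the test run):

	# Confirm the expected tags exist in the registry
	docker manifest inspect registry.k8s.io/kube-apiserver:v1.30.3
	docker manifest inspect registry.k8s.io/pause:3.9
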
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000: exit status 7 (29.064875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-407000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-407000 --alsologtostderr -v=1: exit status 83 (40.324875ms)

-- stdout --
	* The control-plane node embed-certs-407000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-407000"

-- /stdout --
** stderr ** 
	I0805 04:48:28.092240   11408 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:48:28.092400   11408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:28.092403   11408 out.go:304] Setting ErrFile to fd 2...
	I0805 04:48:28.092405   11408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:28.092533   11408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:48:28.092768   11408 out.go:298] Setting JSON to false
	I0805 04:48:28.092774   11408 mustload.go:65] Loading cluster: embed-certs-407000
	I0805 04:48:28.092959   11408 config.go:182] Loaded profile config "embed-certs-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:48:28.097421   11408 out.go:177] * The control-plane node embed-certs-407000 host is not running: state=Stopped
	I0805 04:48:28.101471   11408 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-407000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-407000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000: exit status 7 (28.108625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-407000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000: exit status 7 (29.260833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
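
Exit status 83 here is not a crash: pause bails out while loading the cluster (mustload.go) once it sees the host is Stopped, and prints the start hint instead. The recovery sequence the output itself points to (sketch, same binary and profile):

	out/minikube-darwin-arm64 status -p embed-certs-407000
	out/minikube-darwin-arm64 start -p embed-certs-407000
	out/minikube-darwin-arm64 pause -p embed-certs-407000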

TestStartStop/group/newest-cni/serial/FirstStart (10.01s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-332000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-332000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.942758541s)

-- stdout --
	* [newest-cni-332000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-332000" primary control-plane node in "newest-cni-332000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-332000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 04:48:28.405328   11427 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:48:28.405446   11427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:28.405450   11427 out.go:304] Setting ErrFile to fd 2...
	I0805 04:48:28.405452   11427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:28.405577   11427 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:48:28.406684   11427 out.go:298] Setting JSON to false
	I0805 04:48:28.422781   11427 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6478,"bootTime":1722852030,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:48:28.422850   11427 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:48:28.427503   11427 out.go:177] * [newest-cni-332000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:48:28.433476   11427 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:48:28.433576   11427 notify.go:220] Checking for updates...
	I0805 04:48:28.440387   11427 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:48:28.443421   11427 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:48:28.446443   11427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:48:28.447872   11427 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:48:28.451432   11427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:48:28.454786   11427 config.go:182] Loaded profile config "default-k8s-diff-port-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:48:28.454846   11427 config.go:182] Loaded profile config "multinode-127000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:48:28.454895   11427 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:48:28.459237   11427 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 04:48:28.466478   11427 start.go:297] selected driver: qemu2
	I0805 04:48:28.466487   11427 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:48:28.466496   11427 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:48:28.468802   11427 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0805 04:48:28.468828   11427 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0805 04:48:28.477436   11427 out.go:177] * Automatically selected the socket_vmnet network
	I0805 04:48:28.480558   11427 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0805 04:48:28.480579   11427 cni.go:84] Creating CNI manager for ""
	I0805 04:48:28.480587   11427 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:48:28.480592   11427 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:48:28.480632   11427 start.go:340] cluster config:
	{Name:newest-cni-332000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:48:28.484436   11427 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:28.492406   11427 out.go:177] * Starting "newest-cni-332000" primary control-plane node in "newest-cni-332000" cluster
	I0805 04:48:28.496496   11427 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 04:48:28.496512   11427 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 04:48:28.496525   11427 cache.go:56] Caching tarball of preloaded images
	I0805 04:48:28.496589   11427 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:48:28.496595   11427 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0805 04:48:28.496666   11427 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/newest-cni-332000/config.json ...
	I0805 04:48:28.496683   11427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/newest-cni-332000/config.json: {Name:mkae22118b5b3c93a8d6216d407d3f6f914ade3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:48:28.497124   11427 start.go:360] acquireMachinesLock for newest-cni-332000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:28.497158   11427 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "newest-cni-332000"
	I0805 04:48:28.497169   11427 start.go:93] Provisioning new machine with config: &{Name:newest-cni-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:48:28.497198   11427 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:48:28.506451   11427 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:48:28.524743   11427 start.go:159] libmachine.API.Create for "newest-cni-332000" (driver="qemu2")
	I0805 04:48:28.524771   11427 client.go:168] LocalClient.Create starting
	I0805 04:48:28.524851   11427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:48:28.524881   11427 main.go:141] libmachine: Decoding PEM data...
	I0805 04:48:28.524891   11427 main.go:141] libmachine: Parsing certificate...
	I0805 04:48:28.524935   11427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:48:28.524959   11427 main.go:141] libmachine: Decoding PEM data...
	I0805 04:48:28.524967   11427 main.go:141] libmachine: Parsing certificate...
	I0805 04:48:28.525426   11427 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:48:28.674974   11427 main.go:141] libmachine: Creating SSH key...
	I0805 04:48:28.840084   11427 main.go:141] libmachine: Creating Disk image...
	I0805 04:48:28.840090   11427 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:48:28.840314   11427 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2
	I0805 04:48:28.849958   11427 main.go:141] libmachine: STDOUT: 
	I0805 04:48:28.849983   11427 main.go:141] libmachine: STDERR: 
	I0805 04:48:28.850030   11427 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2 +20000M
	I0805 04:48:28.858069   11427 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:48:28.858082   11427 main.go:141] libmachine: STDERR: 
	I0805 04:48:28.858093   11427 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2
	I0805 04:48:28.858098   11427 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:48:28.858118   11427 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:28.858149   11427 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e0:d6:2a:c3:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2
	I0805 04:48:28.859828   11427 main.go:141] libmachine: STDOUT: 
	I0805 04:48:28.859841   11427 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:28.859863   11427 client.go:171] duration metric: took 335.083208ms to LocalClient.Create
	I0805 04:48:30.862099   11427 start.go:128] duration metric: took 2.364854292s to createHost
	I0805 04:48:30.862175   11427 start.go:83] releasing machines lock for "newest-cni-332000", held for 2.364982875s
	W0805 04:48:30.862310   11427 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:30.875768   11427 out.go:177] * Deleting "newest-cni-332000" in qemu2 ...
	W0805 04:48:30.904472   11427 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:30.904502   11427 start.go:729] Will try again in 5 seconds ...
	I0805 04:48:35.906800   11427 start.go:360] acquireMachinesLock for newest-cni-332000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:35.918749   11427 start.go:364] duration metric: took 11.842875ms to acquireMachinesLock for "newest-cni-332000"
	I0805 04:48:35.918808   11427 start.go:93] Provisioning new machine with config: &{Name:newest-cni-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 04:48:35.919036   11427 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 04:48:35.925605   11427 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 04:48:35.974003   11427 start.go:159] libmachine.API.Create for "newest-cni-332000" (driver="qemu2")
	I0805 04:48:35.974050   11427 client.go:168] LocalClient.Create starting
	I0805 04:48:35.974179   11427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/ca.pem
	I0805 04:48:35.974245   11427 main.go:141] libmachine: Decoding PEM data...
	I0805 04:48:35.974258   11427 main.go:141] libmachine: Parsing certificate...
	I0805 04:48:35.974315   11427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19377-7130/.minikube/certs/cert.pem
	I0805 04:48:35.974360   11427 main.go:141] libmachine: Decoding PEM data...
	I0805 04:48:35.974376   11427 main.go:141] libmachine: Parsing certificate...
	I0805 04:48:35.974909   11427 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 04:48:36.133926   11427 main.go:141] libmachine: Creating SSH key...
	I0805 04:48:36.258995   11427 main.go:141] libmachine: Creating Disk image...
	I0805 04:48:36.259004   11427 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 04:48:36.259219   11427 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2.raw /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2
	I0805 04:48:36.268965   11427 main.go:141] libmachine: STDOUT: 
	I0805 04:48:36.268987   11427 main.go:141] libmachine: STDERR: 
	I0805 04:48:36.269054   11427 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2 +20000M
	I0805 04:48:36.278809   11427 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 04:48:36.278830   11427 main.go:141] libmachine: STDERR: 
	I0805 04:48:36.278856   11427 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2
	I0805 04:48:36.278863   11427 main.go:141] libmachine: Starting QEMU VM...
	I0805 04:48:36.278875   11427 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:36.278905   11427 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:77:a5:6f:1a:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2
	I0805 04:48:36.280641   11427 main.go:141] libmachine: STDOUT: 
	I0805 04:48:36.280656   11427 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:36.280672   11427 client.go:171] duration metric: took 306.59125ms to LocalClient.Create
	I0805 04:48:38.282897   11427 start.go:128] duration metric: took 2.36380325s to createHost
	I0805 04:48:38.282970   11427 start.go:83] releasing machines lock for "newest-cni-332000", held for 2.364173625s
	W0805 04:48:38.283419   11427 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-332000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-332000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:38.292942   11427 out.go:177] 
	W0805 04:48:38.297149   11427 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:48:38.297175   11427 out.go:239] * 
	* 
	W0805 04:48:38.299676   11427 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:48:38.311942   11427 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-332000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000: exit status 7 (67.643709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.01s)
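
For this first-start flow everything up to the network step succeeds: the boot2docker.iso is copied, the SSH key is generated, and both qemu-img calls complete before socket_vmnet_client fails. The disk-image steps can be reproduced in isolation to rule qemu-img out (sketch with shortened paths):

	# Same conversion and resize the log shows succeeding
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M
	# Verify the virtual size took effect
	qemu-img info disk.qcow2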

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-780000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-780000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.5237425s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-780000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-780000" primary control-plane node in "default-k8s-diff-port-780000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-780000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-780000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:48:29.460296   11445 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:48:29.460416   11445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:29.460419   11445 out.go:304] Setting ErrFile to fd 2...
	I0805 04:48:29.460421   11445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:29.460545   11445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:48:29.461535   11445 out.go:298] Setting JSON to false
	I0805 04:48:29.477712   11445 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6479,"bootTime":1722852030,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:48:29.477776   11445 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:48:29.482087   11445 out.go:177] * [default-k8s-diff-port-780000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:48:29.488015   11445 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:48:29.488079   11445 notify.go:220] Checking for updates...
	I0805 04:48:29.494893   11445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:48:29.497958   11445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:48:29.500973   11445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:48:29.502497   11445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:48:29.505933   11445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:48:29.509281   11445 config.go:182] Loaded profile config "default-k8s-diff-port-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:48:29.509544   11445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:48:29.513789   11445 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:48:29.520941   11445 start.go:297] selected driver: qemu2
	I0805 04:48:29.520950   11445 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:48:29.521014   11445 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:48:29.523265   11445 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 04:48:29.523288   11445 cni.go:84] Creating CNI manager for ""
	I0805 04:48:29.523295   11445 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:48:29.523323   11445 start.go:340] cluster config:
	{Name:default-k8s-diff-port-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-780000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:48:29.526859   11445 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:29.534935   11445 out.go:177] * Starting "default-k8s-diff-port-780000" primary control-plane node in "default-k8s-diff-port-780000" cluster
	I0805 04:48:29.538985   11445 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:48:29.539001   11445 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:48:29.539012   11445 cache.go:56] Caching tarball of preloaded images
	I0805 04:48:29.539078   11445 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:48:29.539083   11445 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 04:48:29.539168   11445 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/default-k8s-diff-port-780000/config.json ...
	I0805 04:48:29.539683   11445 start.go:360] acquireMachinesLock for default-k8s-diff-port-780000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:30.862337   11445 start.go:364] duration metric: took 1.322620375s to acquireMachinesLock for "default-k8s-diff-port-780000"
	I0805 04:48:30.862475   11445 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:48:30.862529   11445 fix.go:54] fixHost starting: 
	I0805 04:48:30.863191   11445 fix.go:112] recreateIfNeeded on default-k8s-diff-port-780000: state=Stopped err=<nil>
	W0805 04:48:30.863239   11445 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:48:30.867940   11445 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-780000" ...
	I0805 04:48:30.879800   11445 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:30.880036   11445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:ed:c5:fd:b2:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2
	I0805 04:48:30.889964   11445 main.go:141] libmachine: STDOUT: 
	I0805 04:48:30.890038   11445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:30.890153   11445 fix.go:56] duration metric: took 27.63275ms for fixHost
	I0805 04:48:30.890173   11445 start.go:83] releasing machines lock for "default-k8s-diff-port-780000", held for 27.797875ms
	W0805 04:48:30.890208   11445 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:48:30.890420   11445 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:30.890439   11445 start.go:729] Will try again in 5 seconds ...
	I0805 04:48:35.892811   11445 start.go:360] acquireMachinesLock for default-k8s-diff-port-780000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:35.893300   11445 start.go:364] duration metric: took 366.75µs to acquireMachinesLock for "default-k8s-diff-port-780000"
	I0805 04:48:35.893426   11445 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:48:35.893448   11445 fix.go:54] fixHost starting: 
	I0805 04:48:35.894227   11445 fix.go:112] recreateIfNeeded on default-k8s-diff-port-780000: state=Stopped err=<nil>
	W0805 04:48:35.894254   11445 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:48:35.903663   11445 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-780000" ...
	I0805 04:48:35.908608   11445 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:35.908895   11445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:ed:c5:fd:b2:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/default-k8s-diff-port-780000/disk.qcow2
	I0805 04:48:35.918475   11445 main.go:141] libmachine: STDOUT: 
	I0805 04:48:35.918553   11445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:35.918654   11445 fix.go:56] duration metric: took 25.205959ms for fixHost
	I0805 04:48:35.918669   11445 start.go:83] releasing machines lock for "default-k8s-diff-port-780000", held for 25.344625ms
	W0805 04:48:35.918892   11445 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-780000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-780000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:35.932706   11445 out.go:177] 
	W0805 04:48:35.936668   11445 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:48:35.936705   11445 out.go:239] * 
	* 
	W0805 04:48:35.938572   11445 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:48:35.947653   11445 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-780000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000: exit status 7 (48.347ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.57s)
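
The stderr above also documents the driver's recovery path: one restart attempt, then a single retry five seconds later (start.go:729 "Will try again in 5 seconds"), both hitting the same connect error before the run exits with status 80. A hedged sketch of that control flow (function name illustrative, not minikube's):

	// startWithOneRetry mirrors the logged behavior: try once, wait 5s, try again.
	// (import: "time")
	func startWithOneRetry(start func() error) error {
		if err := start(); err == nil {
			return nil
		}
		time.Sleep(5 * time.Second)
		// A second failure is what surfaces as GUEST_PROVISION / exit status 80.
		return start()
	}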

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-780000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000: exit status 7 (34.762958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)
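
This failure is purely downstream of SecondStart: the cluster was never created, so the kubeconfig has no context named default-k8s-diff-port-780000. A sketch of the kind of lookup that yields the error above, assuming client-go's clientcmd as the mechanism (illustrative, not the test's actual code):

	// contextExists reports whether a kubeconfig file defines the named context.
	// (import: "k8s.io/client-go/tools/clientcmd")
	func contextExists(kubeconfigPath, name string) (bool, error) {
		cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
		if err != nil {
			return false, err
		}
		_, ok := cfg.Contexts[name] // false here produces "context ... does not exist"
		return ok, nil
	}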

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-780000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-780000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-780000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.0645ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-780000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-780000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000: exit status 7 (33.53175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-780000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000: exit status 7 (30.272792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
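
The (-want +got) block above has the shape of a go-cmp diff: every expected image is "missing" because image list ran against a VM that never booted. A minimal reproduction of that output format (assuming github.com/google/go-cmp, whose diff style matches the log):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{"registry.k8s.io/pause:3.9"} // one entry from the diff above
		got := []string{}                             // empty: the host is Stopped
		fmt.Println(cmp.Diff(want, got))              // each missing entry prints with a leading "-"
	}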

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-780000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-780000 --alsologtostderr -v=1: exit status 83 (40.387292ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-780000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:48:36.211573   11467 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:48:36.211732   11467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:36.211738   11467 out.go:304] Setting ErrFile to fd 2...
	I0805 04:48:36.211740   11467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:36.211868   11467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:48:36.212098   11467 out.go:298] Setting JSON to false
	I0805 04:48:36.212104   11467 mustload.go:65] Loading cluster: default-k8s-diff-port-780000
	I0805 04:48:36.212307   11467 config.go:182] Loaded profile config "default-k8s-diff-port-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:48:36.214193   11467 out.go:177] * The control-plane node default-k8s-diff-port-780000 host is not running: state=Stopped
	I0805 04:48:36.217605   11467 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-780000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-780000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000: exit status 7 (30.41925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-780000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000: exit status 7 (29.549834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
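
Three distinct exit codes appear across this group, all consistent with a host that never started: 80 from start (GUEST_PROVISION), 83 from pause against a stopped profile, and 7 from status. A small helper summarizing them as observed in this report (the mapping is inferred from the log, not quoted from minikube's source):

	// describeExit maps the exit codes seen in this report to their apparent meaning.
	// (import: "fmt")
	func describeExit(code int) string {
		switch code {
		case 7:
			return "status: host stopped (may be ok)"
		case 80:
			return "start: guest provisioning failed"
		case 83:
			return "command skipped: host not running"
		default:
			return fmt.Sprintf("exit status %d (not seen in this report)", code)
		}
	}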

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-332000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-332000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.186768209s)

                                                
                                                
-- stdout --
	* [newest-cni-332000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-332000" primary control-plane node in "newest-cni-332000" cluster
	* Restarting existing qemu2 VM for "newest-cni-332000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-332000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:48:41.573256   11515 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:48:41.573404   11515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:41.573407   11515 out.go:304] Setting ErrFile to fd 2...
	I0805 04:48:41.573409   11515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:41.573535   11515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:48:41.574513   11515 out.go:298] Setting JSON to false
	I0805 04:48:41.590468   11515 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6491,"bootTime":1722852030,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:48:41.590533   11515 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:48:41.594495   11515 out.go:177] * [newest-cni-332000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:48:41.605468   11515 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:48:41.605494   11515 notify.go:220] Checking for updates...
	I0805 04:48:41.613310   11515 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:48:41.616336   11515 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:48:41.619319   11515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:48:41.622262   11515 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:48:41.625363   11515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:48:41.628733   11515 config.go:182] Loaded profile config "newest-cni-332000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 04:48:41.629022   11515 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:48:41.633253   11515 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:48:41.640375   11515 start.go:297] selected driver: qemu2
	I0805 04:48:41.640383   11515 start.go:901] validating driver "qemu2" against &{Name:newest-cni-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPo
rts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:48:41.640442   11515 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:48:41.642801   11515 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0805 04:48:41.642834   11515 cni.go:84] Creating CNI manager for ""
	I0805 04:48:41.642839   11515 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:48:41.642866   11515 start.go:340] cluster config:
	{Name:newest-cni-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-332000 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false Ext
raDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:48:41.646431   11515 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:48:41.654336   11515 out.go:177] * Starting "newest-cni-332000" primary control-plane node in "newest-cni-332000" cluster
	I0805 04:48:41.658307   11515 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 04:48:41.658324   11515 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 04:48:41.658336   11515 cache.go:56] Caching tarball of preloaded images
	I0805 04:48:41.658403   11515 preload.go:172] Found /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 04:48:41.658415   11515 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0805 04:48:41.658472   11515 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/newest-cni-332000/config.json ...
	I0805 04:48:41.658983   11515 start.go:360] acquireMachinesLock for newest-cni-332000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:41.659017   11515 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "newest-cni-332000"
	I0805 04:48:41.659025   11515 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:48:41.659030   11515 fix.go:54] fixHost starting: 
	I0805 04:48:41.659153   11515 fix.go:112] recreateIfNeeded on newest-cni-332000: state=Stopped err=<nil>
	W0805 04:48:41.659163   11515 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:48:41.662373   11515 out.go:177] * Restarting existing qemu2 VM for "newest-cni-332000" ...
	I0805 04:48:41.669339   11515 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:41.669380   11515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:77:a5:6f:1a:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2
	I0805 04:48:41.671500   11515 main.go:141] libmachine: STDOUT: 
	I0805 04:48:41.671520   11515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:41.671549   11515 fix.go:56] duration metric: took 12.518375ms for fixHost
	I0805 04:48:41.671554   11515 start.go:83] releasing machines lock for "newest-cni-332000", held for 12.532042ms
	W0805 04:48:41.671561   11515 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:48:41.671598   11515 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:41.671603   11515 start.go:729] Will try again in 5 seconds ...
	I0805 04:48:46.673960   11515 start.go:360] acquireMachinesLock for newest-cni-332000: {Name:mk5fbaa6aad6fde5190234c4f35634a73666427d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 04:48:46.674434   11515 start.go:364] duration metric: took 346.833µs to acquireMachinesLock for "newest-cni-332000"
	I0805 04:48:46.674556   11515 start.go:96] Skipping create...Using existing machine configuration
	I0805 04:48:46.674575   11515 fix.go:54] fixHost starting: 
	I0805 04:48:46.675258   11515 fix.go:112] recreateIfNeeded on newest-cni-332000: state=Stopped err=<nil>
	W0805 04:48:46.675288   11515 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 04:48:46.682767   11515 out.go:177] * Restarting existing qemu2 VM for "newest-cni-332000" ...
	I0805 04:48:46.687857   11515 qemu.go:418] Using hvf for hardware acceleration
	I0805 04:48:46.688028   11515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:77:a5:6f:1a:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19377-7130/.minikube/machines/newest-cni-332000/disk.qcow2
	I0805 04:48:46.696981   11515 main.go:141] libmachine: STDOUT: 
	I0805 04:48:46.697039   11515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 04:48:46.697100   11515 fix.go:56] duration metric: took 22.522875ms for fixHost
	I0805 04:48:46.697112   11515 start.go:83] releasing machines lock for "newest-cni-332000", held for 22.628542ms
	W0805 04:48:46.697308   11515 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-332000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-332000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 04:48:46.704769   11515 out.go:177] 
	W0805 04:48:46.708699   11515 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 04:48:46.708832   11515 out.go:239] * 
	* 
	W0805 04:48:46.711557   11515 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:48:46.718741   11515 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-332000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000: exit status 7 (66.635333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
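
The launch step logged at main.go:141 above shells out to socket_vmnet_client, which must reach the daemon's unix socket before qemu-system-aarch64 even starts. A trimmed sketch of that invocation (qemu flags elided; the full command line is in the log):

	// (imports: "log", "os/exec")
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet", "qemu-system-aarch64") // remaining qemu flags elided
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With no daemon on the socket this is the logged
		// `Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`.
		log.Printf("socket_vmnet_client failed: %v\n%s", err, out)
	}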

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-332000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000: exit status 7 (30.064792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-332000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-332000 --alsologtostderr -v=1: exit status 83 (41.342542ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-332000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-332000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 04:48:46.902694   11529 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:48:46.902843   11529 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:46.902846   11529 out.go:304] Setting ErrFile to fd 2...
	I0805 04:48:46.902848   11529 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:48:46.902987   11529 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:48:46.903221   11529 out.go:298] Setting JSON to false
	I0805 04:48:46.903227   11529 mustload.go:65] Loading cluster: newest-cni-332000
	I0805 04:48:46.903434   11529 config.go:182] Loaded profile config "newest-cni-332000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 04:48:46.907523   11529 out.go:177] * The control-plane node newest-cni-332000 host is not running: state=Stopped
	I0805 04:48:46.911559   11529 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-332000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-332000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000: exit status 7 (29.400167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-332000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000: exit status 7 (30.209208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 7.86
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-rc.0/json-events 7.67
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.28
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.21
48 TestErrorSpam/start 0.38
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 8.8
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.68
64 TestFunctional/serial/CacheCmd/cache/add_local 1.05
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.22
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.09
102 TestFunctional/parallel/License 0.21
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 1.87
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
135 TestFunctional/parallel/ProfileCmd/profile_list 0.08
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.34
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
249 TestStoppedBinaryUpgrade/Setup 0.95
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.47
267 TestNoKubernetes/serial/Stop 3.49
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
283 TestStartStop/group/old-k8s-version/serial/Stop 3.26
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.74
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
295 TestStartStop/group/no-preload/serial/Stop 1.76
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
308 TestStartStop/group/embed-certs/serial/Stop 1.92
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.58
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 2.97
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-095000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-095000: exit status 85 (95.852333ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |          |
	|         | -p download-only-095000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 04:22:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 04:22:07.439819    7626 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:22:07.439959    7626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:22:07.439962    7626 out.go:304] Setting ErrFile to fd 2...
	I0805 04:22:07.439964    7626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:22:07.440084    7626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	W0805 04:22:07.440169    7626 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19377-7130/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19377-7130/.minikube/config/config.json: no such file or directory
	I0805 04:22:07.441380    7626 out.go:298] Setting JSON to true
	I0805 04:22:07.457642    7626 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4897,"bootTime":1722852030,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:22:07.457717    7626 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:22:07.462632    7626 out.go:97] [download-only-095000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:22:07.462785    7626 notify.go:220] Checking for updates...
	W0805 04:22:07.462881    7626 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball: no such file or directory
	I0805 04:22:07.466318    7626 out.go:169] MINIKUBE_LOCATION=19377
	I0805 04:22:07.469324    7626 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:22:07.474314    7626 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:22:07.477233    7626 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:22:07.480338    7626 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	W0805 04:22:07.486180    7626 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 04:22:07.486374    7626 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:22:07.489209    7626 out.go:97] Using the qemu2 driver based on user configuration
	I0805 04:22:07.489228    7626 start.go:297] selected driver: qemu2
	I0805 04:22:07.489231    7626 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:22:07.489301    7626 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:22:07.492342    7626 out.go:169] Automatically selected the socket_vmnet network
	I0805 04:22:07.497714    7626 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0805 04:22:07.497796    7626 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 04:22:07.497842    7626 cni.go:84] Creating CNI manager for ""
	I0805 04:22:07.497858    7626 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 04:22:07.497910    7626 start.go:340] cluster config:
	{Name:download-only-095000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-095000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:22:07.501816    7626 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:22:07.506287    7626 out.go:97] Downloading VM boot image ...
	I0805 04:22:07.506302    7626 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0805 04:22:15.654351    7626 out.go:97] Starting "download-only-095000" primary control-plane node in "download-only-095000" cluster
	I0805 04:22:15.654378    7626 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 04:22:15.711237    7626 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 04:22:15.711243    7626 cache.go:56] Caching tarball of preloaded images
	I0805 04:22:15.711398    7626 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 04:22:15.716469    7626 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0805 04:22:15.716476    7626 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 04:22:15.792425    7626 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 04:22:21.354204    7626 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 04:22:21.354359    7626 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 04:22:22.048868    7626 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 04:22:22.049069    7626 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/download-only-095000/config.json ...
	I0805 04:22:22.049087    7626 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19377-7130/.minikube/profiles/download-only-095000/config.json: {Name:mke8a6efef77f0e2f34a481607e36c77e7e08333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 04:22:22.049328    7626 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 04:22:22.049532    7626 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0805 04:22:22.426872    7626 out.go:169] 
	W0805 04:22:22.431973    7626 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19377-7130/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1067a1a80 0x1067a1a80 0x1067a1a80 0x1067a1a80 0x1067a1a80 0x1067a1a80 0x1067a1a80] Decompressors:map[bz2:0x140008009b0 gz:0x140008009b8 tar:0x14000800930 tar.bz2:0x14000800960 tar.gz:0x14000800980 tar.xz:0x14000800990 tar.zst:0x140008009a0 tbz2:0x14000800960 tgz:0x14000800980 txz:0x14000800990 tzst:0x140008009a0 xz:0x140008009c0 zip:0x140008009d0 zst:0x140008009c8] Getters:map[file:0x14000701490 http:0x140006a0280 https:0x140006a02d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0805 04:22:22.431996    7626 out_reason.go:110] 
	W0805 04:22:22.439947    7626 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 04:22:22.443854    7626 out.go:169] 
	
	
	* The control-plane node download-only-095000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-095000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-095000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (7.86s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-741000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-741000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (7.863582125s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (7.86s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-741000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-741000: exit status 85 (80.821875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | -p download-only-095000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| delete  | -p download-only-095000        | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| start   | -o=json --download-only        | download-only-741000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | -p download-only-741000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 04:22:22
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 04:22:22.861567    7656 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:22:22.861696    7656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:22:22.861699    7656 out.go:304] Setting ErrFile to fd 2...
	I0805 04:22:22.861702    7656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:22:22.861826    7656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:22:22.862856    7656 out.go:298] Setting JSON to true
	I0805 04:22:22.879036    7656 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4912,"bootTime":1722852030,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:22:22.879096    7656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:22:22.883713    7656 out.go:97] [download-only-741000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:22:22.883789    7656 notify.go:220] Checking for updates...
	I0805 04:22:22.887827    7656 out.go:169] MINIKUBE_LOCATION=19377
	I0805 04:22:22.894778    7656 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:22:22.902784    7656 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:22:22.905894    7656 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:22:22.908805    7656 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	W0805 04:22:22.915784    7656 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 04:22:22.915947    7656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:22:22.919833    7656 out.go:97] Using the qemu2 driver based on user configuration
	I0805 04:22:22.919842    7656 start.go:297] selected driver: qemu2
	I0805 04:22:22.919846    7656 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:22:22.919888    7656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:22:22.923846    7656 out.go:169] Automatically selected the socket_vmnet network
	I0805 04:22:22.929197    7656 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0805 04:22:22.929299    7656 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 04:22:22.929323    7656 cni.go:84] Creating CNI manager for ""
	I0805 04:22:22.929331    7656 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:22:22.929339    7656 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:22:22.929382    7656 start.go:340] cluster config:
	{Name:download-only-741000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-741000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:22:22.932823    7656 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:22:22.935817    7656 out.go:97] Starting "download-only-741000" primary control-plane node in "download-only-741000" cluster
	I0805 04:22:22.935824    7656 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:22:22.986193    7656 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 04:22:22.986216    7656 cache.go:56] Caching tarball of preloaded images
	I0805 04:22:22.986390    7656 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 04:22:22.989936    7656 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0805 04:22:22.989943    7656 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0805 04:22:23.062230    7656 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-741000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-741000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-741000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-rc.0/json-events (7.67s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-638000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-638000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 : (7.668707334s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (7.67s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-638000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-638000: exit status 85 (79.772958ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | -p download-only-095000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| delete  | -p download-only-095000           | download-only-095000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| start   | -o=json --download-only           | download-only-741000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | -p download-only-741000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| delete  | -p download-only-741000           | download-only-741000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT | 05 Aug 24 04:22 PDT |
	| start   | -o=json --download-only           | download-only-638000 | jenkins | v1.33.1 | 05 Aug 24 04:22 PDT |                     |
	|         | -p download-only-638000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 04:22:31
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 04:22:31.018609    7683 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:22:31.018747    7683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:22:31.018751    7683 out.go:304] Setting ErrFile to fd 2...
	I0805 04:22:31.018753    7683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:22:31.018893    7683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:22:31.019915    7683 out.go:298] Setting JSON to true
	I0805 04:22:31.035800    7683 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4921,"bootTime":1722852030,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:22:31.035877    7683 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:22:31.039977    7683 out.go:97] [download-only-638000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:22:31.040078    7683 notify.go:220] Checking for updates...
	I0805 04:22:31.043881    7683 out.go:169] MINIKUBE_LOCATION=19377
	I0805 04:22:31.049457    7683 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:22:31.053930    7683 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:22:31.056887    7683 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:22:31.059850    7683 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	W0805 04:22:31.065791    7683 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 04:22:31.065946    7683 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:22:31.068863    7683 out.go:97] Using the qemu2 driver based on user configuration
	I0805 04:22:31.068874    7683 start.go:297] selected driver: qemu2
	I0805 04:22:31.068879    7683 start.go:901] validating driver "qemu2" against <nil>
	I0805 04:22:31.068964    7683 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 04:22:31.071855    7683 out.go:169] Automatically selected the socket_vmnet network
	I0805 04:22:31.076904    7683 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0805 04:22:31.076981    7683 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 04:22:31.076998    7683 cni.go:84] Creating CNI manager for ""
	I0805 04:22:31.077008    7683 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 04:22:31.077023    7683 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 04:22:31.077072    7683 start.go:340] cluster config:
	{Name:download-only-638000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:22:31.080669    7683 iso.go:125] acquiring lock: {Name:mk776e2858d1302eea61300b47938de41fafcf46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 04:22:31.083859    7683 out.go:97] Starting "download-only-638000" primary control-plane node in "download-only-638000" cluster
	I0805 04:22:31.083866    7683 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 04:22:31.137751    7683 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 04:22:31.137769    7683 cache.go:56] Caching tarball of preloaded images
	I0805 04:22:31.137926    7683 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 04:22:31.141915    7683 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0805 04:22:31.141930    7683 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 04:22:31.224261    7683 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:c1f196b49f29ebea060b9249b6cb8e03 -> /Users/jenkins/minikube-integration/19377-7130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-638000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-638000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-638000
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.28s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-737000 --alsologtostderr --binary-mirror http://127.0.0.1:51013 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-737000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-737000
--- PASS: TestBinaryMirror (0.28s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-939000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-939000: exit status 85 (55.53175ms)

-- stdout --
	* Profile "addons-939000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-939000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-939000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-939000: exit status 85 (59.449ms)

-- stdout --
	* Profile "addons-939000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-939000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.21s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.21s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 status: exit status 7 (30.822875ms)

-- stdout --
	nospam-993000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 status: exit status 7 (30.066708ms)

-- stdout --
	nospam-993000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 status: exit status 7 (30.456666ms)

-- stdout --
	nospam-993000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 pause: exit status 83 (39.867375ms)

-- stdout --
	* The control-plane node nospam-993000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-993000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 pause: exit status 83 (39.919916ms)

-- stdout --
	* The control-plane node nospam-993000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-993000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 pause: exit status 83 (39.684666ms)

-- stdout --
	* The control-plane node nospam-993000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-993000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 unpause: exit status 83 (39.910959ms)

-- stdout --
	* The control-plane node nospam-993000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-993000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 unpause: exit status 83 (38.856292ms)

-- stdout --
	* The control-plane node nospam-993000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-993000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 unpause: exit status 83 (38.8035ms)

-- stdout --
	* The control-plane node nospam-993000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-993000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (8.8s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 stop: (2.084150125s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 stop: (3.409035375s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-993000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-993000 stop: (3.304824458s)
--- PASS: TestErrorSpam/stop (8.80s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19377-7130/.minikube/files/etc/test/nested/copy/7624/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.68s)

TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-814000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local486199388/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 cache add minikube-local-cache-test:functional-814000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 cache delete minikube-local-cache-test:functional-814000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-814000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 config get cpus: exit status 14 (31.18425ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 config get cpus: exit status 14 (33.27375ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
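
Note on the exit codes above: the ConfigCmd assertions lean on an exit-code contract, where `config get` for an unset key exits 14 with "Error: specified key could not be found in config" on stderr. A minimal Go sketch of a caller honoring that contract (the helper name is hypothetical, and a plain `minikube` on PATH stands in for out/minikube-darwin-arm64):

	// configGet treats exit status 14 as "key unset" rather than a failure,
	// matching the behavior recorded in the ConfigCmd block above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func configGet(profile, key string) (string, bool, error) {
		out, err := exec.Command("minikube", "-p", profile, "config", "get", key).Output()
		if err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) && ee.ExitCode() == 14 {
				return "", false, nil // key simply not set
			}
			return "", false, err
		}
		return strings.TrimSpace(string(out)), true, nil
	}

	func main() {
		val, ok, err := configGet("functional-814000", "cpus")
		fmt.Println(val, ok, err)
	}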

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-814000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-814000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (166.846334ms)
-- stdout --
	* [functional-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0805 04:24:16.112419    8273 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:24:16.112624    8273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:24:16.112629    8273 out.go:304] Setting ErrFile to fd 2...
	I0805 04:24:16.112632    8273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:24:16.112862    8273 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:24:16.114374    8273 out.go:298] Setting JSON to false
	I0805 04:24:16.134781    8273 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5026,"bootTime":1722852030,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:24:16.134861    8273 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:24:16.140322    8273 out.go:177] * [functional-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 04:24:16.147310    8273 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:24:16.147344    8273 notify.go:220] Checking for updates...
	I0805 04:24:16.154290    8273 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:24:16.157307    8273 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:24:16.160265    8273 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:24:16.163302    8273 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:24:16.166298    8273 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:24:16.169517    8273 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:24:16.169839    8273 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:24:16.174309    8273 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 04:24:16.181233    8273 start.go:297] selected driver: qemu2
	I0805 04:24:16.181237    8273 start.go:901] validating driver "qemu2" against &{Name:functional-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:24:16.181284    8273 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:24:16.188302    8273 out.go:177] 
	W0805 04:24:16.191271    8273 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0805 04:24:16.195272    8273 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-814000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
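
Both --dry-run invocations above fail validation before any VM work begins: 250MB is below minikube's usable minimum of 1800MB, hence exit status 23 and RSRC_INSUFFICIENT_REQ_MEMORY. A toy Go sketch of that pre-flight shape (the 1800 floor is read off the log message; the function is illustrative, not minikube's own code):

	package main

	import "fmt"

	// minUsableMemoryMB mirrors the floor quoted in the error above.
	const minUsableMemoryMB = 1800

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("requested %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250)) // mirrors the --memory 250MB runs above
	}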

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-814000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-814000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.905667ms)
-- stdout --
	* [functional-814000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0805 04:24:16.340893    8284 out.go:291] Setting OutFile to fd 1 ...
	I0805 04:24:16.340991    8284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:24:16.340994    8284 out.go:304] Setting ErrFile to fd 2...
	I0805 04:24:16.340996    8284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 04:24:16.341119    8284 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19377-7130/.minikube/bin
	I0805 04:24:16.342497    8284 out.go:298] Setting JSON to false
	I0805 04:24:16.359238    8284 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5026,"bootTime":1722852030,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 04:24:16.359319    8284 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 04:24:16.364280    8284 out.go:177] * [functional-814000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0805 04:24:16.371285    8284 out.go:177]   - MINIKUBE_LOCATION=19377
	I0805 04:24:16.371343    8284 notify.go:220] Checking for updates...
	I0805 04:24:16.378295    8284 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	I0805 04:24:16.381332    8284 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 04:24:16.384265    8284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 04:24:16.387263    8284 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	I0805 04:24:16.390315    8284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 04:24:16.393566    8284 config.go:182] Loaded profile config "functional-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 04:24:16.393832    8284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 04:24:16.398287    8284 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0805 04:24:16.405201    8284 start.go:297] selected driver: qemu2
	I0805 04:24:16.405207    8284 start.go:901] validating driver "qemu2" against &{Name:functional-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 04:24:16.405266    8284 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 04:24:16.412309    8284 out.go:177] 
	W0805 04:24:16.415300    8284 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0805 04:24:16.419256    8284 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.835741208s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-814000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.87s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-814000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image rm docker.io/kicbase/echo-server:functional-814000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-814000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 image save --daemon docker.io/kicbase/echo-server:functional-814000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-814000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "45.6325ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "34.093542ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "48.70475ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.582375ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.010961834s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
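
The dscacheutil check above can be reproduced by hand while `minikube tunnel` is running; it queries the macOS Directory Service cache rather than dig, so it exercises the resolver path real applications use. A small Go wrapper around the same command (a sketch that assumes macOS and an active tunnel):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same query the test issues; the trailing dot makes the name fully qualified.
		out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name",
			"nginx-svc.default.svc.cluster.local.").CombinedOutput()
		fmt.Println(string(out), err)
	}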

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-814000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-814000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-814000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-814000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-928000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-928000 --output=json --user=testUser: (3.343671708s)
--- PASS: TestJSONOutput/stop/Command (3.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-419000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-419000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.917708ms)
-- stdout --
	{"specversion":"1.0","id":"6862842c-b104-4e40-825c-a8a8a2a84071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-419000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a32964f1-e807-4de1-85d3-b4d65d352bca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19377"}}
	{"specversion":"1.0","id":"597a7fac-fc85-401b-b679-06929aea84ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig"}}
	{"specversion":"1.0","id":"a15bb567-5e94-4fde-8425-30d893343a63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"8566c482-a5ae-4f35-8871-8290fa12d96d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c2f6b80b-dbaa-41f7-a358-aba25681c6c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube"}}
	{"specversion":"1.0","id":"05730a58-cb51-48af-8ed2-3a1e8441bbdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8008cfda-cb20-44fe-9a22-f54c8e5cedae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-419000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-419000
--- PASS: TestErrorJSONOutput (0.20s)
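
With --output=json, each line of stdout above is a self-contained CloudEvents-style JSON record, so failures can be detected by machine instead of by scraping text. A Go sketch of a decoder for that stream (the struct models only the two fields used here, not minikube's full schema):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// e.g. minikube start ... --output=json | thisprogram
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some records are very long
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // tolerate any non-JSON noise in the stream
			}
			if strings.HasSuffix(ev.Type, ".error") {
				fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}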

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.95s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.95s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (95.751ms)
-- stdout --
	* [NoKubernetes-839000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19377
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19377-7130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19377-7130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-839000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-839000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.553208ms)
-- stdout --
	* The control-plane node NoKubernetes-839000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-839000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
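
Exit status 83 above is minikube's own guard (host not running), not systemctl's: the ssh command never reaches the guest, which is exactly what this check wants after a stop. A Go sketch separating the three possible outcomes (helper name hypothetical; `minikube` on PATH stands in for the test binary):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func kubeletState(profile string) string {
		err := exec.Command("minikube", "ssh", "-p", profile,
			"sudo systemctl is-active --quiet service kubelet").Run()
		if err == nil {
			return "kubelet active"
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 83 {
			return "host not running" // minikube bailed before reaching the guest
		}
		return "kubelet not active" // systemctl's own non-zero exit
	}

	func main() {
		fmt.Println(kubeletState("NoKubernetes-839000"))
	}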

TestNoKubernetes/serial/ProfileList (31.47s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.676331833s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.794038041s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.47s)

TestNoKubernetes/serial/Stop (3.49s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-839000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-839000: (3.491152292s)
--- PASS: TestNoKubernetes/serial/Stop (3.49s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-839000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-839000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.537417ms)
-- stdout --
	* The control-plane node NoKubernetes-839000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-839000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (3.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-207000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-207000 --alsologtostderr -v=3: (3.264485708s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.26s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-528000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-207000 -n old-k8s-version-207000: exit status 7 (30.543875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-207000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
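
The "status error: exit status 7 (may be ok)" lines in these EnableAddonAfterStop blocks reflect that `minikube status` reports a stopped host through its exit code as well as stdout, and the test deliberately tolerates that. A Go sketch of such a tolerant status probe (illustrative helper; the meaning of exit code 7 is taken from the log above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func hostState(profile string) (string, error) {
		out, err := exec.Command("minikube", "status",
			"--format={{.Host}}", "-p", profile).Output()
		state := strings.TrimSpace(string(out))
		var ee *exec.ExitError
		if err != nil && errors.As(err, &ee) && ee.ExitCode() == 7 {
			return state, nil // e.g. "Stopped": an error code, but a usable answer
		}
		return state, err
	}

	func main() {
		fmt.Println(hostState("old-k8s-version-207000"))
	}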

TestStartStop/group/no-preload/serial/Stop (1.76s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-049000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-049000 --alsologtostderr -v=3: (1.760073959s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.76s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (44.396292ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-049000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/embed-certs/serial/Stop (1.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-407000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-407000 --alsologtostderr -v=3: (1.917912417s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.92s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-407000 -n embed-certs-407000: exit status 7 (57.56275ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-407000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-780000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-780000 --alsologtostderr -v=3: (3.582779375s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.58s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-780000 -n default-k8s-diff-port-780000: exit status 7 (62.420833ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-780000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-332000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.97s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-332000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-332000 --alsologtostderr -v=3: (2.969154625s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.97s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000: exit status 7 (58.446708ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-332000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (9.46s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-814000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port80972817/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722857021170225000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port80972817/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722857021170225000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port80972817/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722857021170225000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port80972817/001/test-1722857021170225000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (55.273709ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.218542ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.820917ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.584333ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.999416ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.08425ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.28475ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "sudo umount -f /mount-9p": exit status 83 (42.760333ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-814000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-814000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port80972817/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (9.46s)
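The repeated findmnt failures above are the test's poll loop: it retries until the 9p mount appears or the attempts run out, then skips rather than fails, citing the macOS code-signing prompt; the exit status 83 output shows the host never started at all. A rough Go sketch of that loop under stated assumptions (waitForMount, the attempt count, and the one-second backoff are illustrative, not the suite's actual values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `minikube ssh findmnt` until the 9p mount shows up
// or the attempts run out, mirroring the retries in the log above.
func waitForMount(profile, mountPoint string, attempts int) bool {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if err := cmd.Run(); err == nil {
			return true // mount is visible inside the guest
		}
		time.Sleep(time.Second) // illustrative backoff
	}
	return false
}

func main() {
	if !waitForMount("functional-814000", "/mount-9p", 7) {
		// The real test calls t.Skip here, citing the macOS prompt
		// required for unsigned binaries to listen on non-localhost ports.
		fmt.Println("skipping: mount did not appear")
	}
}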

TestFunctional/parallel/MountCmd/specific-port (10.32s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-814000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2015391444/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.872625ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.334667ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.087167ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.122792ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.239084ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.778833ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (81.836375ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "sudo umount -f /mount-9p": exit status 83 (45.597459ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-814000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-814000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2015391444/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.32s)

TestFunctional/parallel/MountCmd/VerifyCleanup (15.09s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-814000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3692665705/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-814000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3692665705/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-814000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3692665705/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1: exit status 83 (82.764458ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1: exit status 83 (84.677959ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1: exit status 83 (90.759541ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1: exit status 83 (89.167458ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1: exit status 83 (85.279833ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1: exit status 83 (86.794334ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1: exit status 83 (86.228292ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-814000 ssh "findmnt -T" /mount1: exit status 83 (86.04425ms)

-- stdout --
	* The control-plane node functional-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-814000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-814000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3692665705/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-814000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3692665705/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-814000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3692665705/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (15.09s)
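VerifyCleanup launches three concurrent minikube mount daemons and, even when it skips, still runs the "(dbg) stopping [...]" teardown above so no mount process outlives the test. A compressed Go sketch of that start/stop bookkeeping, with hypothetical names and paths (startMountDaemon and /tmp/src are illustrative, not from the suite):

package main

import (
	"fmt"
	"os/exec"
)

// startMountDaemon launches `minikube mount` in the background and returns
// the process handle so the caller can stop it during cleanup.
func startMountDaemon(profile, src, dst string) (*exec.Cmd, error) {
	cmd := exec.Command("out/minikube-darwin-arm64", "mount",
		"-p", profile, src+":"+dst, "--alsologtostderr", "-v=1")
	if err := cmd.Start(); err != nil { // Start, not Run: it must stay in the background
		return nil, err
	}
	return cmd, nil
}

func main() {
	var daemons []*exec.Cmd
	for _, dst := range []string{"/mount1", "/mount2", "/mount3"} {
		if cmd, err := startMountDaemon("functional-814000", "/tmp/src", dst); err == nil {
			daemons = append(daemons, cmd)
		}
	}
	// Teardown mirrors the "(dbg) stopping [...]" lines above: every daemon
	// is killed whether or not the mounts ever appeared.
	for _, cmd := range daemons {
		_ = cmd.Process.Kill()
		_ = cmd.Wait() // reap the killed process
		fmt.Println("stopped:", cmd.Args)
	}
}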

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
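Most entries in this skip list come from one-line environment guards at the top of each test: inspect the OS, architecture, or configured driver, and bail out with t.Skip before doing any work. A generic Go sketch of that guard pattern (the function name is illustrative; the messages echo the skip reasons logged above):

package example

import (
	"runtime"
	"testing"
)

// skipUnlessSupported mirrors the guards behind the SKIP lines above,
// e.g. "test only runs on windows", "Skip if not linux.",
// "only runs with docker driver".
func skipUnlessSupported(t *testing.T, wantOS, wantDriver, gotDriver string) {
	t.Helper()
	if runtime.GOOS != wantOS {
		t.Skipf("test only runs on %s", wantOS)
	}
	if gotDriver != wantDriver {
		t.Skipf("only runs with %s driver", wantDriver)
	}
}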

TestNetworkPlugins/group/cilium (2.3s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-816000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-816000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-816000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-816000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-816000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-816000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-816000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-816000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-816000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-816000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-816000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: /etc/hosts:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: /etc/resolv.conf:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-816000

>>> host: crictl pods:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: crictl containers:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> k8s: describe netcat deployment:
error: context "cilium-816000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-816000" does not exist

>>> k8s: netcat logs:
error: context "cilium-816000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-816000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-816000" does not exist

>>> k8s: coredns logs:
error: context "cilium-816000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-816000" does not exist

>>> k8s: api server logs:
error: context "cilium-816000" does not exist

>>> host: /etc/cni:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: ip a s:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: ip r s:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: iptables-save:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: iptables table nat:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-816000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-816000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-816000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-816000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-816000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-816000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-816000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-816000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-816000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-816000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-816000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: kubelet daemon config:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> k8s: kubelet logs:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-816000

>>> host: docker daemon status:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: docker daemon config:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: docker system info:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: cri-docker daemon status:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: cri-docker daemon config:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: cri-dockerd version:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: containerd daemon status:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: containerd daemon config:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: containerd config dump:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: crio daemon status:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: crio daemon config:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: /etc/crio:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

>>> host: crio config:
* Profile "cilium-816000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-816000"

----------------------- debugLogs end: cilium-816000 [took: 2.193065041s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-816000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-816000
--- SKIP: TestNetworkPlugins/group/cilium (2.30s)
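Although the cilium variant is skipped as outdated, the harness still emits the debugLogs battery above: a fixed list of labeled kubectl/minikube/host diagnostics run against the profile, each printed under a ">>>" header; here every command reports "context not found" or "profile not found" because the cluster was never created. A bare-bones Go sketch of such a loop, with an illustrative subset of commands (the exact command list is an assumption, not the suite's):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "cilium-816000"
	// A trimmed, illustrative subset of the diagnostics shown above.
	diags := []struct {
		label string
		cmd   []string
	}{
		{">>> host: ip a s:", []string{"out/minikube-darwin-arm64", "-p", profile, "ssh", "ip a s"}},
		{">>> k8s: kube-proxy logs:", []string{"kubectl", "--context", profile, "logs", "-n", "kube-system", "-l", "k8s-app=kube-proxy"}},
		{">>> k8s: cms:", []string{"kubectl", "--context", profile, "get", "cm", "-A"}},
	}
	for _, d := range diags {
		out, err := exec.Command(d.cmd[0], d.cmd[1:]...).CombinedOutput()
		fmt.Println(d.label)
		fmt.Print(string(out)) // errors surface in the output, as in the log
		if err != nil && len(out) == 0 {
			fmt.Println(err)
		}
		fmt.Println()
	}
}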
TestStartStop/group/disable-driver-mounts (0.12s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-988000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-988000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)