Test Report: QEMU_macOS 19787

c1252a7f2092ae156b37572b060158ae23786afe:2024-10-10:36592

Failed tests (157/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 18.42
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.27
22 TestOffline 10.05
27 TestAddons/Setup 10.31
28 TestCertOptions 10.13
29 TestCertExpiration 195.19
30 TestDockerFlags 10.2
31 TestForceSystemdFlag 10.24
32 TestForceSystemdEnv 10.13
38 TestErrorSpam/setup 9.88
47 TestFunctional/serial/StartWithProxy 9.95
49 TestFunctional/serial/SoftStart 5.26
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.18
61 TestFunctional/serial/MinikubeKubectlCmd 2.12
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.19
63 TestFunctional/serial/ExtraConfig 5.27
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.08
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.13
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.31
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.32
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/Version/components 0.05
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.05
111 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.09
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 107.63
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.33
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.29
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.06
141 TestMultiControlPlane/serial/StartCluster 9.88
142 TestMultiControlPlane/serial/DeployApp 104.15
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.12
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 51.55
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 9.06
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.12
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 3.35
156 TestMultiControlPlane/serial/RestartCluster 5.27
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
162 TestImageBuild/serial/Setup 9.83
165 TestJSONOutput/start/Command 9.74
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.06
194 TestMinikubeProfile 10.21
197 TestMountStart/serial/StartWithMountFirst 9.93
200 TestMultiNode/serial/FreshStart2Nodes 10.03
201 TestMultiNode/serial/DeployApp2Nodes 91.36
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.15
208 TestMultiNode/serial/StartAfterStop 51.96
209 TestMultiNode/serial/RestartKeepsNodes 8.71
210 TestMultiNode/serial/DeleteNode 0.11
211 TestMultiNode/serial/StopMultiNode 3.61
212 TestMultiNode/serial/RestartMultiNode 5.27
213 TestMultiNode/serial/ValidateNameConflict 20.18
217 TestPreload 10.02
219 TestScheduledStopUnix 9.95
220 TestSkaffold 12.78
223 TestRunningBinaryUpgrade 606.59
225 TestKubernetesUpgrade 18.88
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 0.98
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.12
241 TestStoppedBinaryUpgrade/Upgrade 576.61
243 TestPause/serial/Start 10.15
253 TestNoKubernetes/serial/StartWithK8s 9.91
254 TestNoKubernetes/serial/StartWithStopK8s 5.33
255 TestNoKubernetes/serial/Start 5.3
259 TestNoKubernetes/serial/StartNoArgs 5.35
261 TestNetworkPlugins/group/auto/Start 9.8
262 TestNetworkPlugins/group/kindnet/Start 9.85
263 TestNetworkPlugins/group/calico/Start 9.87
264 TestNetworkPlugins/group/custom-flannel/Start 9.81
265 TestNetworkPlugins/group/false/Start 9.9
266 TestNetworkPlugins/group/enable-default-cni/Start 10.09
267 TestNetworkPlugins/group/flannel/Start 9.89
268 TestNetworkPlugins/group/bridge/Start 9.9
269 TestNetworkPlugins/group/kubenet/Start 10.13
272 TestStartStop/group/old-k8s-version/serial/FirstStart 10.01
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
281 TestStartStop/group/old-k8s-version/serial/Pause 0.11
283 TestStartStop/group/no-preload/serial/FirstStart 9.81
285 TestStartStop/group/embed-certs/serial/FirstStart 11.21
286 TestStartStop/group/no-preload/serial/DeployApp 0.11
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.26
289 TestStartStop/group/embed-certs/serial/DeployApp 0.1
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
293 TestStartStop/group/no-preload/serial/SecondStart 5.27
295 TestStartStop/group/embed-certs/serial/SecondStart 6.57
296 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
297 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
299 TestStartStop/group/no-preload/serial/Pause 0.11
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.96
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
303 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
305 TestStartStop/group/embed-certs/serial/Pause 0.11
307 TestStartStop/group/newest-cni/serial/FirstStart 10.02
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
317 TestStartStop/group/newest-cni/serial/SecondStart 5.27
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.12
TestDownloadOnly/v1.20.0/json-events (18.42s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-370000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-370000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (18.413899666s)
-- stdout --
	{"specversion":"1.0","id":"072d5fba-c154-4d28-9a84-bf40d7edb389","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-370000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"18756854-26d5-4ef9-a3d3-ca1f49a013a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19787"}}
	{"specversion":"1.0","id":"3d44ed28-dd2a-4876-807e-2ef4fa963040","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig"}}
	{"specversion":"1.0","id":"4b8d8b94-e776-4eb8-93d4-bf8a1a9424aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"902af10a-28af-4045-9bf9-b9e4b8bf8848","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"11272474-2b1e-402e-ae6e-425b1aebb38e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube"}}
	{"specversion":"1.0","id":"e41e369c-a57a-4087-86a6-ab18f147d35e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"1109e7df-8362-4fd2-95d5-79fb25d2c914","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"67570410-c1ec-48f9-aee9-2c3f0b4a104c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"dddd9ad1-927a-4b10-96a2-0969a8472862","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"182bee10-72d8-426d-963e-4491b5c40bd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-370000\" primary control-plane node in \"download-only-370000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e32f588-d9f9-48ae-9f1d-fd3b3623120e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e355b0a-b718-4905-9421-02bd0ce25fc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10549cfe0 0x10549cfe0 0x10549cfe0 0x10549cfe0 0x10549cfe0 0x10549cfe0 0x10549cfe0] Decompressors:map[bz2:0x140007925b0 gz:0x140007925b8 tar:0x140007924e0 tar.bz2:0x140007924f0 tar.gz:0x14000792540 tar.xz:0x14000792550 tar.zst:0x14000792590 tbz2:0x140007924f0 tgz:0x1
4000792540 txz:0x14000792550 tzst:0x14000792590 xz:0x140007925c0 zip:0x140007925d0 zst:0x140007925c8] Getters:map[file:0x140014d4550 http:0x14000c865f0 https:0x14000c86640] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"f664ae49-4b5e-4f53-ab9d-2d5d6b2fb7ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
** stderr ** 
	I1010 11:22:21.283635   11136 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:22:21.283818   11136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:22:21.283821   11136 out.go:358] Setting ErrFile to fd 2...
	I1010 11:22:21.283824   11136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:22:21.283941   11136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	W1010 11:22:21.284051   11136 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19787-10623/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19787-10623/.minikube/config/config.json: no such file or directory
	I1010 11:22:21.285470   11136 out.go:352] Setting JSON to true
	I1010 11:22:21.303010   11136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6712,"bootTime":1728577829,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:22:21.303086   11136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:22:21.307701   11136 out.go:97] [download-only-370000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:22:21.307857   11136 notify.go:220] Checking for updates...
	W1010 11:22:21.307909   11136 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball: no such file or directory
	I1010 11:22:21.311671   11136 out.go:169] MINIKUBE_LOCATION=19787
	I1010 11:22:21.314641   11136 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:22:21.317643   11136 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:22:21.320640   11136 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:22:21.323622   11136 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	W1010 11:22:21.328656   11136 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1010 11:22:21.328854   11136 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:22:21.331610   11136 out.go:97] Using the qemu2 driver based on user configuration
	I1010 11:22:21.331627   11136 start.go:297] selected driver: qemu2
	I1010 11:22:21.331641   11136 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:22:21.331687   11136 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:22:21.334647   11136 out.go:169] Automatically selected the socket_vmnet network
	I1010 11:22:21.341021   11136 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1010 11:22:21.341116   11136 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1010 11:22:21.341156   11136 cni.go:84] Creating CNI manager for ""
	I1010 11:22:21.341198   11136 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1010 11:22:21.341253   11136 start.go:340] cluster config:
	{Name:download-only-370000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-370000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:22:21.345938   11136 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:22:21.349636   11136 out.go:97] Downloading VM boot image ...
	I1010 11:22:21.349650   11136 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso
	I1010 11:22:30.291956   11136 out.go:97] Starting "download-only-370000" primary control-plane node in "download-only-370000" cluster
	I1010 11:22:30.291987   11136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1010 11:22:30.363056   11136 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1010 11:22:30.363084   11136 cache.go:56] Caching tarball of preloaded images
	I1010 11:22:30.363329   11136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1010 11:22:30.368422   11136 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1010 11:22:30.368430   11136 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1010 11:22:30.465136   11136 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1010 11:22:38.354599   11136 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1010 11:22:38.354766   11136 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1010 11:22:39.048088   11136 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1010 11:22:39.048294   11136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/download-only-370000/config.json ...
	I1010 11:22:39.048310   11136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/download-only-370000/config.json: {Name:mk8adaed966bd55990f86cf0fe6964be518521c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:22:39.048556   11136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1010 11:22:39.048792   11136 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1010 11:22:39.620260   11136 out.go:193] 
	W1010 11:22:39.623373   11136 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10549cfe0 0x10549cfe0 0x10549cfe0 0x10549cfe0 0x10549cfe0 0x10549cfe0 0x10549cfe0] Decompressors:map[bz2:0x140007925b0 gz:0x140007925b8 tar:0x140007924e0 tar.bz2:0x140007924f0 tar.gz:0x14000792540 tar.xz:0x14000792550 tar.zst:0x14000792590 tbz2:0x140007924f0 tgz:0x14000792540 txz:0x14000792550 tzst:0x14000792590 xz:0x140007925c0 zip:0x140007925d0 zst:0x140007925c8] Getters:map[file:0x140014d4550 http:0x14000c865f0 https:0x14000c86640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1010 11:22:39.623408   11136 out_reason.go:110] 
	W1010 11:22:39.630277   11136 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:22:39.634296   11136 out.go:193] 
** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-370000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (18.42s)
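
Note: the root cause is the 404 on the kubectl checksum URL in the error above — dl.k8s.io does not appear to publish darwin/arm64 kubectl binaries for v1.20.0 (Apple Silicon builds arrived in later releases), so minikube cannot cache kubectl and exits with status 40 (INET_CACHE_KUBECTL). A minimal Go sketch — not part of the test suite — that probes whether a given release/platform is published, using the same checksum URL minikube fetches first:

	// probe_kubectl.go — a minimal sketch (not part of the minikube test
	// suite) that checks whether dl.k8s.io publishes a kubectl binary for
	// a given version/OS/arch by issuing a HEAD request against the
	// checksum URL that minikube fetches first.
	package main

	import (
		"fmt"
		"net/http"
		"os"
	)

	func main() {
		// The URL from the failure above; v1.20.0 appears to predate
		// published darwin/arm64 kubectl builds, so 404 is expected here.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Fprintln(os.Stderr, "request failed:", err)
			os.Exit(1)
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // expected: 404 Not Found
	}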
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
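
Note: this subtest fails as a direct consequence of the download failure above — it only checks that the kubectl binary landed in the cache. A simplified sketch of that check (illustrative, not the actual code in aaa_download_only_test.go; the real run resolves the cache under MINIKUBE_HOME rather than the user's home directory):

	// cache_check.go — a simplified sketch of the assertion this subtest
	// makes: stat the cached kubectl path and report failure if absent.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		home, _ := os.UserHomeDir()
		path := filepath.Join(home, ".minikube/cache/darwin/arm64/v1.20.0/kubectl")
		if _, err := os.Stat(path); err != nil {
			fmt.Println("kubectl not cached:", err) // the condition failing here
			os.Exit(1)
		}
		fmt.Println("kubectl cached at", path)
	}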
TestBinaryMirror (0.27s)
=== RUN   TestBinaryMirror
I1010 11:22:48.695623   11135 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-999000 --alsologtostderr --binary-mirror http://127.0.0.1:53123 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-999000 --alsologtostderr --binary-mirror http://127.0.0.1:53123 --driver=qemu2 : exit status 40 (159.433917ms)
-- stdout --
	* [binary-mirror-999000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-999000" primary control-plane node in "binary-mirror-999000" cluster
	
	
-- /stdout --
** stderr ** 
	I1010 11:22:48.757809   11199 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:22:48.757952   11199 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:22:48.757955   11199 out.go:358] Setting ErrFile to fd 2...
	I1010 11:22:48.757958   11199 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:22:48.758074   11199 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:22:48.759225   11199 out.go:352] Setting JSON to false
	I1010 11:22:48.776901   11199 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6739,"bootTime":1728577829,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:22:48.776966   11199 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:22:48.782078   11199 out.go:177] * [binary-mirror-999000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:22:48.790084   11199 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:22:48.790139   11199 notify.go:220] Checking for updates...
	I1010 11:22:48.795575   11199 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:22:48.799039   11199 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:22:48.802117   11199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:22:48.805078   11199 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:22:48.808198   11199 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:22:48.812093   11199 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:22:48.819117   11199 start.go:297] selected driver: qemu2
	I1010 11:22:48.819123   11199 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:22:48.819163   11199 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:22:48.822123   11199 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:22:48.827571   11199 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1010 11:22:48.827668   11199 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1010 11:22:48.827685   11199 cni.go:84] Creating CNI manager for ""
	I1010 11:22:48.827710   11199 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:22:48.827717   11199 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:22:48.827763   11199 start.go:340] cluster config:
	{Name:binary-mirror-999000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-999000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:53123 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:22:48.832375   11199 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:22:48.840091   11199 out.go:177] * Starting "binary-mirror-999000" primary control-plane node in "binary-mirror-999000" cluster
	I1010 11:22:48.842970   11199 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:22:48.842985   11199 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:22:48.842995   11199 cache.go:56] Caching tarball of preloaded images
	I1010 11:22:48.843087   11199 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:22:48.843092   11199 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:22:48.843299   11199 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/binary-mirror-999000/config.json ...
	I1010 11:22:48.843311   11199 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/binary-mirror-999000/config.json: {Name:mkf07b4df1232c4a2dccb5333ed0bb519d3ae972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:22:48.843630   11199 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:22:48.843690   11199 download.go:107] Downloading: http://127.0.0.1:53123/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:53123/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I1010 11:22:48.862193   11199 out.go:201] 
	W1010 11:22:48.866062   11199 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:53123/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:53123/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:53123/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:53123/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1093c8fe0 0x1093c8fe0 0x1093c8fe0 0x1093c8fe0 0x1093c8fe0 0x1093c8fe0 0x1093c8fe0] Decompressors:map[bz2:0x1400000f2e0 gz:0x1400000f2e8 tar:0x1400000f290 tar.bz2:0x1400000f2a0 tar.gz:0x1400000f2b0 tar.xz:0x1400000f2c0 tar.zst:0x1400000f2d0 tbz2:0x1400000f2a0 tgz:0x1400000f2b0 txz:0x1400000f2c0 tzst:0x1400000f2d0 xz:0x1400000f2f0 zip:0x1400000f300 zst:0x1400000f2f8] Getters:map[file:0x140015286c0 http:0x14000c1f040 https:0x14000c1f090] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:53123/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:53123/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:53123/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:53123/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1093c8fe0 0x1093c8fe0 0x1093c8fe0 0x1093c8fe0 0x1093c8fe0 0x1093c8fe0 0x1093c8fe0] Decompressors:map[bz2:0x1400000f2e0 gz:0x1400000f2e8 tar:0x1400000f290 tar.bz2:0x1400000f2a0 tar.gz:0x1400000f2b0 tar.xz:0x1400000f2c0 tar.zst:0x1400000f2d0 tbz2:0x1400000f2a0 tgz:0x1400000f2b0 txz:0x1400000f2c0 tzst:0x1400000f2d0 xz:0x1400000f2f0 zip:0x1400000f300 zst:0x1400000f2f8] Getters:map[file:0x140015286c0 http:0x14000c1f040 https:0x14000c1f090] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W1010 11:22:48.866070   11199 out.go:270] * 
	* 
	W1010 11:22:48.866493   11199 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:22:48.881059   11199 out.go:201] 
** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-999000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:53123" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-999000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-999000
--- FAIL: TestBinaryMirror (0.27s)
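
Note: here the download from the local mirror on 127.0.0.1:53123 died with "unexpected EOF", i.e. the connection closed mid-response. The path layout minikube requests from a --binary-mirror endpoint is visible in the log above: {mirror}/{version}/bin/{os}/{arch}/kubectl with a kubectl.sha256 alongside. A minimal Go sketch of a static mirror that satisfies that layout, useful for reproducing the test locally (the ./mirror-root tree is an assumption, not something the suite provides):

	// mirror.go — a minimal sketch of a static binary mirror serving the
	// layout minikube requests with --binary-mirror, as seen in the log:
	// {mirror}/{version}/bin/{os}/{arch}/kubectl[.sha256].
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Populate, e.g.:
		//   ./mirror-root/v1.31.1/bin/darwin/arm64/kubectl
		//   ./mirror-root/v1.31.1/bin/darwin/arm64/kubectl.sha256
		handler := http.FileServer(http.Dir("mirror-root"))
		log.Println("serving mirror on http://127.0.0.1:53123")
		log.Fatal(http.ListenAndServe("127.0.0.1:53123", handler))
	}

With the sketch running, minikube can be pointed at it the same way the test does: out/minikube-darwin-arm64 start --download-only -p binary-mirror-999000 --binary-mirror http://127.0.0.1:53123 --driver=qemu2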
TestOffline (10.05s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-123000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-123000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.927183792s)
-- stdout --
	* [offline-docker-123000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-123000" primary control-plane node in "offline-docker-123000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-123000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1010 11:34:02.930658   12804 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:34:02.930843   12804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:34:02.930846   12804 out.go:358] Setting ErrFile to fd 2...
	I1010 11:34:02.930849   12804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:34:02.930976   12804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:34:02.932491   12804 out.go:352] Setting JSON to false
	I1010 11:34:02.951942   12804 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7413,"bootTime":1728577829,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:34:02.952023   12804 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:34:02.957226   12804 out.go:177] * [offline-docker-123000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:34:02.966213   12804 notify.go:220] Checking for updates...
	I1010 11:34:02.970088   12804 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:34:02.978057   12804 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:34:02.988164   12804 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:34:02.997029   12804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:34:03.001185   12804 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:34:03.005082   12804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:34:03.008524   12804 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:34:03.008584   12804 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:34:03.012142   12804 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:34:03.019113   12804 start.go:297] selected driver: qemu2
	I1010 11:34:03.019126   12804 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:34:03.019134   12804 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:34:03.021331   12804 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:34:03.024130   12804 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:34:03.027160   12804 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:34:03.027179   12804 cni.go:84] Creating CNI manager for ""
	I1010 11:34:03.027199   12804 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:34:03.027203   12804 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:34:03.027241   12804 start.go:340] cluster config:
	{Name:offline-docker-123000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-123000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:34:03.031747   12804 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:34:03.039088   12804 out.go:177] * Starting "offline-docker-123000" primary control-plane node in "offline-docker-123000" cluster
	I1010 11:34:03.042168   12804 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:34:03.042204   12804 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:34:03.042213   12804 cache.go:56] Caching tarball of preloaded images
	I1010 11:34:03.042311   12804 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:34:03.042317   12804 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:34:03.042397   12804 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/offline-docker-123000/config.json ...
	I1010 11:34:03.042408   12804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/offline-docker-123000/config.json: {Name:mkaf4ca95659ee8356c404c3b8a734e1c3969ba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:34:03.042694   12804 start.go:360] acquireMachinesLock for offline-docker-123000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:34:03.042745   12804 start.go:364] duration metric: took 37.916µs to acquireMachinesLock for "offline-docker-123000"
	I1010 11:34:03.042757   12804 start.go:93] Provisioning new machine with config: &{Name:offline-docker-123000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-123000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:34:03.042788   12804 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:34:03.047098   12804 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1010 11:34:03.062228   12804 start.go:159] libmachine.API.Create for "offline-docker-123000" (driver="qemu2")
	I1010 11:34:03.062254   12804 client.go:168] LocalClient.Create starting
	I1010 11:34:03.062334   12804 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:34:03.062380   12804 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:03.062397   12804 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:03.062450   12804 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:34:03.062486   12804 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:03.062491   12804 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:03.062852   12804 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:34:03.217913   12804 main.go:141] libmachine: Creating SSH key...
	I1010 11:34:03.414705   12804 main.go:141] libmachine: Creating Disk image...
	I1010 11:34:03.414715   12804 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:34:03.414904   12804 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/disk.qcow2
	I1010 11:34:03.425497   12804 main.go:141] libmachine: STDOUT: 
	I1010 11:34:03.425518   12804 main.go:141] libmachine: STDERR: 
	I1010 11:34:03.425582   12804 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/disk.qcow2 +20000M
	I1010 11:34:03.435292   12804 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:34:03.435314   12804 main.go:141] libmachine: STDERR: 
	I1010 11:34:03.435345   12804 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/disk.qcow2
	I1010 11:34:03.435350   12804 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:34:03.435366   12804 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:34:03.435399   12804 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:47:30:2b:a2:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/disk.qcow2
	I1010 11:34:03.437267   12804 main.go:141] libmachine: STDOUT: 
	I1010 11:34:03.437280   12804 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:34:03.437298   12804 client.go:171] duration metric: took 375.042542ms to LocalClient.Create
	I1010 11:34:05.439346   12804 start.go:128] duration metric: took 2.396569791s to createHost
	I1010 11:34:05.439391   12804 start.go:83] releasing machines lock for "offline-docker-123000", held for 2.39665325s
	W1010 11:34:05.439405   12804 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:34:05.446690   12804 out.go:177] * Deleting "offline-docker-123000" in qemu2 ...
	W1010 11:34:05.463043   12804 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:34:05.463055   12804 start.go:729] Will try again in 5 seconds ...
	I1010 11:34:10.465196   12804 start.go:360] acquireMachinesLock for offline-docker-123000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:34:10.465749   12804 start.go:364] duration metric: took 452.416µs to acquireMachinesLock for "offline-docker-123000"
	I1010 11:34:10.465852   12804 start.go:93] Provisioning new machine with config: &{Name:offline-docker-123000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-123000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:34:10.466181   12804 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:34:10.475915   12804 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1010 11:34:10.523022   12804 start.go:159] libmachine.API.Create for "offline-docker-123000" (driver="qemu2")
	I1010 11:34:10.523078   12804 client.go:168] LocalClient.Create starting
	I1010 11:34:10.523216   12804 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:34:10.523297   12804 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:10.523313   12804 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:10.523381   12804 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:34:10.523439   12804 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:10.523454   12804 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:10.524102   12804 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:34:10.684821   12804 main.go:141] libmachine: Creating SSH key...
	I1010 11:34:10.769508   12804 main.go:141] libmachine: Creating Disk image...
	I1010 11:34:10.769514   12804 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:34:10.769685   12804 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/disk.qcow2
	I1010 11:34:10.779262   12804 main.go:141] libmachine: STDOUT: 
	I1010 11:34:10.779290   12804 main.go:141] libmachine: STDERR: 
	I1010 11:34:10.779350   12804 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/disk.qcow2 +20000M
	I1010 11:34:10.787716   12804 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:34:10.787730   12804 main.go:141] libmachine: STDERR: 
	I1010 11:34:10.787750   12804 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/disk.qcow2
	I1010 11:34:10.787755   12804 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:34:10.787762   12804 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:34:10.787796   12804 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:d6:ff:b4:f1:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/offline-docker-123000/disk.qcow2
	I1010 11:34:10.789603   12804 main.go:141] libmachine: STDOUT: 
	I1010 11:34:10.789616   12804 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:34:10.789629   12804 client.go:171] duration metric: took 266.546667ms to LocalClient.Create
	I1010 11:34:12.791709   12804 start.go:128] duration metric: took 2.325511334s to createHost
	I1010 11:34:12.791738   12804 start.go:83] releasing machines lock for "offline-docker-123000", held for 2.325986416s
	W1010 11:34:12.791817   12804 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-123000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-123000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:34:12.800029   12804 out.go:201] 
	W1010 11:34:12.803959   12804 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:34:12.803969   12804 out.go:270] * 
	* 
	W1010 11:34:12.804566   12804 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:34:12.816014   12804 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-123000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-10 11:34:12.822966 -0700 PDT m=+711.624018751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-123000 -n offline-docker-123000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-123000 -n offline-docker-123000: exit status 7 (39.846542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-123000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-123000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-123000
--- FAIL: TestOffline (10.05s)
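Every failure in this batch dies at the same point: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet. "Connection refused" on a unix socket means the socket file exists but nothing is listening behind it. A minimal probe (a sketch, assuming only the SocketVMnetPath quoted in the config dumps above) confirms whether the daemon is up before blaming the driver:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config dumps in this report.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Same condition the driver logs as:
		//   Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening")
}

If the probe fails, restarting the socket_vmnet service on the build agent (it typically runs as root) should clear this whole batch of failures at once.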

TestAddons/Setup (10.31s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-244000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-244000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (10.306117416s)

-- stdout --
	* [addons-244000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-244000" primary control-plane node in "addons-244000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:22:49.056293   11213 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:22:49.056443   11213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:22:49.056447   11213 out.go:358] Setting ErrFile to fd 2...
	I1010 11:22:49.056449   11213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:22:49.056585   11213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:22:49.057740   11213 out.go:352] Setting JSON to false
	I1010 11:22:49.075271   11213 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6740,"bootTime":1728577829,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:22:49.075336   11213 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:22:49.080129   11213 out.go:177] * [addons-244000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:22:49.087097   11213 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:22:49.087129   11213 notify.go:220] Checking for updates...
	I1010 11:22:49.093065   11213 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:22:49.096087   11213 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:22:49.099093   11213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:22:49.102003   11213 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:22:49.105051   11213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:22:49.108258   11213 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:22:49.111058   11213 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:22:49.118084   11213 start.go:297] selected driver: qemu2
	I1010 11:22:49.118091   11213 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:22:49.118096   11213 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:22:49.120507   11213 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:22:49.121820   11213 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:22:49.125130   11213 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:22:49.125145   11213 cni.go:84] Creating CNI manager for ""
	I1010 11:22:49.125174   11213 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:22:49.125178   11213 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:22:49.125205   11213 start.go:340] cluster config:
	{Name:addons-244000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:22:49.129791   11213 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:22:49.137112   11213 out.go:177] * Starting "addons-244000" primary control-plane node in "addons-244000" cluster
	I1010 11:22:49.141105   11213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:22:49.141119   11213 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:22:49.141124   11213 cache.go:56] Caching tarball of preloaded images
	I1010 11:22:49.141196   11213 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:22:49.141202   11213 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:22:49.141397   11213 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/addons-244000/config.json ...
	I1010 11:22:49.141408   11213 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/addons-244000/config.json: {Name:mk319a73b8d5b4382fa21032507c2c7a72b63026 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:22:49.141747   11213 start.go:360] acquireMachinesLock for addons-244000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:22:49.141842   11213 start.go:364] duration metric: took 89µs to acquireMachinesLock for "addons-244000"
	I1010 11:22:49.141854   11213 start.go:93] Provisioning new machine with config: &{Name:addons-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:22:49.141881   11213 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:22:49.149098   11213 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1010 11:22:49.165667   11213 start.go:159] libmachine.API.Create for "addons-244000" (driver="qemu2")
	I1010 11:22:49.165695   11213 client.go:168] LocalClient.Create starting
	I1010 11:22:49.165890   11213 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:22:49.376414   11213 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:22:49.623560   11213 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:22:49.838427   11213 main.go:141] libmachine: Creating SSH key...
	I1010 11:22:49.925511   11213 main.go:141] libmachine: Creating Disk image...
	I1010 11:22:49.925520   11213 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:22:49.925718   11213 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/disk.qcow2
	I1010 11:22:49.935582   11213 main.go:141] libmachine: STDOUT: 
	I1010 11:22:49.935599   11213 main.go:141] libmachine: STDERR: 
	I1010 11:22:49.935650   11213 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/disk.qcow2 +20000M
	I1010 11:22:49.943954   11213 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:22:49.943968   11213 main.go:141] libmachine: STDERR: 
	I1010 11:22:49.943984   11213 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/disk.qcow2
	I1010 11:22:49.943989   11213 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:22:49.944027   11213 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:22:49.944061   11213 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:80:e4:d0:37:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/disk.qcow2
	I1010 11:22:49.945841   11213 main.go:141] libmachine: STDOUT: 
	I1010 11:22:49.945873   11213 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:22:49.945906   11213 client.go:171] duration metric: took 780.203459ms to LocalClient.Create
	I1010 11:22:51.948064   11213 start.go:128] duration metric: took 2.806189625s to createHost
	I1010 11:22:51.948120   11213 start.go:83] releasing machines lock for "addons-244000", held for 2.806295834s
	W1010 11:22:51.948172   11213 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:22:51.957437   11213 out.go:177] * Deleting "addons-244000" in qemu2 ...
	W1010 11:22:51.983684   11213 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:22:51.983723   11213 start.go:729] Will try again in 5 seconds ...
	I1010 11:22:56.985872   11213 start.go:360] acquireMachinesLock for addons-244000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:22:56.986376   11213 start.go:364] duration metric: took 401.5µs to acquireMachinesLock for "addons-244000"
	I1010 11:22:56.986467   11213 start.go:93] Provisioning new machine with config: &{Name:addons-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:22:56.986742   11213 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:22:56.996370   11213 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1010 11:22:57.043125   11213 start.go:159] libmachine.API.Create for "addons-244000" (driver="qemu2")
	I1010 11:22:57.043168   11213 client.go:168] LocalClient.Create starting
	I1010 11:22:57.043285   11213 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:22:57.043349   11213 main.go:141] libmachine: Decoding PEM data...
	I1010 11:22:57.043370   11213 main.go:141] libmachine: Parsing certificate...
	I1010 11:22:57.043438   11213 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:22:57.043495   11213 main.go:141] libmachine: Decoding PEM data...
	I1010 11:22:57.043505   11213 main.go:141] libmachine: Parsing certificate...
	I1010 11:22:57.044103   11213 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:22:57.199810   11213 main.go:141] libmachine: Creating SSH key...
	I1010 11:22:57.266461   11213 main.go:141] libmachine: Creating Disk image...
	I1010 11:22:57.266472   11213 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:22:57.266670   11213 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/disk.qcow2
	I1010 11:22:57.276545   11213 main.go:141] libmachine: STDOUT: 
	I1010 11:22:57.276560   11213 main.go:141] libmachine: STDERR: 
	I1010 11:22:57.276642   11213 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/disk.qcow2 +20000M
	I1010 11:22:57.284993   11213 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:22:57.285012   11213 main.go:141] libmachine: STDERR: 
	I1010 11:22:57.285029   11213 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/disk.qcow2
	I1010 11:22:57.285034   11213 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:22:57.285043   11213 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:22:57.285078   11213 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:cc:ab:48:61:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/addons-244000/disk.qcow2
	I1010 11:22:57.286824   11213 main.go:141] libmachine: STDOUT: 
	I1010 11:22:57.286849   11213 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:22:57.286863   11213 client.go:171] duration metric: took 243.692375ms to LocalClient.Create
	I1010 11:22:59.289086   11213 start.go:128] duration metric: took 2.302320375s to createHost
	I1010 11:22:59.289191   11213 start.go:83] releasing machines lock for "addons-244000", held for 2.302816083s
	W1010 11:22:59.289695   11213 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:22:59.299244   11213 out.go:201] 
	W1010 11:22:59.303421   11213 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:22:59.303477   11213 out.go:270] * 
	* 
	W1010 11:22:59.305426   11213 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:22:59.316290   11213 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-244000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (10.31s)
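The stderr trace above shows the driver's recovery path: createHost fails, the profile is deleted, minikube waits five seconds, retries once, and only then exits with GUEST_PROVISION (exit status 80). A sketch of that control flow, with illustrative names rather than the actual minikube internals:

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the libmachine create path; in this run it
// always fails with the socket_vmnet connection error seen in the log.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			// minikube surfaces this as GUEST_PROVISION, exit status 80.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}

Because the root cause is on the host side, the retry can never succeed, which is why each of these tests burns roughly ten seconds: two create attempts of a few seconds each plus the fixed 5 s back-off.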

TestCertOptions (10.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-371000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-371000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.854258417s)

-- stdout --
	* [cert-options-371000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-371000" primary control-plane node in "cert-options-371000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-371000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-371000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-371000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-371000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-371000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (91.057625ms)

-- stdout --
	* The control-plane node cert-options-371000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-371000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-371000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-371000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-371000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-371000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.514709ms)

-- stdout --
	* The control-plane node cert-options-371000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-371000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-371000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-371000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-371000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-10 11:34:43.300042 -0700 PDT m=+742.101393626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-371000 -n cert-options-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-371000 -n cert-options-371000: exit status 7 (33.318709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-371000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-371000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-371000
--- FAIL: TestCertOptions (10.13s)
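What the assertions at cert_options_test.go:69 would have verified, had the VM booted, is that the --apiserver-ips and --apiserver-names flags end up in the apiserver certificate's Subject Alternative Name extension. A standalone sketch of that check (the file name is a placeholder for a copy of /var/lib/minikube/certs/apiserver.crt pulled out of the guest):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("IP SANs: ", cert.IPAddresses) // test expects 127.0.0.1 and 192.168.15.15
	fmt.Println("DNS SANs:", cert.DNSNames)    // test expects localhost and www.google.com
}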

TestCertExpiration (195.19s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-986000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-986000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.791590667s)

-- stdout --
	* [cert-expiration-986000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-986000" primary control-plane node in "cert-expiration-986000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-986000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-986000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-986000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-986000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.250131792s)

-- stdout --
	* [cert-expiration-986000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-986000" primary control-plane node in "cert-expiration-986000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-986000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-986000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-986000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-986000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-986000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-986000" primary control-plane node in "cert-expiration-986000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-986000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-986000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-986000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-10 11:37:43.23383 -0700 PDT m=+922.036948251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-986000 -n cert-expiration-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-986000 -n cert-expiration-986000: exit status 7 (66.152833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-986000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-986000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-986000
--- FAIL: TestCertExpiration (195.19s)
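Most of this test's 195 s is not retries: the first start fails in about 10 s, the test then waits out the --cert-expiration=3m window, and the second start fails in about 5 s (9.8 + 180 + 5.3 is roughly 195). The warning it expected to see on the second start boils down to comparing a certificate's NotAfter against the clock; a minimal sketch, again with a placeholder file name:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().After(cert.NotAfter) {
		fmt.Printf("certificate expired at %s\n", cert.NotAfter)
	} else {
		fmt.Printf("certificate valid until %s\n", cert.NotAfter)
	}
}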

TestDockerFlags (10.2s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-736000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-736000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.9464165s)

-- stdout --
	* [docker-flags-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-736000" primary control-plane node in "docker-flags-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:34:23.107658   12994 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:34:23.107797   12994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:34:23.107801   12994 out.go:358] Setting ErrFile to fd 2...
	I1010 11:34:23.107803   12994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:34:23.107919   12994 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:34:23.109074   12994 out.go:352] Setting JSON to false
	I1010 11:34:23.126593   12994 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7434,"bootTime":1728577829,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:34:23.126667   12994 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:34:23.131769   12994 out.go:177] * [docker-flags-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:34:23.138841   12994 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:34:23.138867   12994 notify.go:220] Checking for updates...
	I1010 11:34:23.149844   12994 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:34:23.158808   12994 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:34:23.161823   12994 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:34:23.165792   12994 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:34:23.173808   12994 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:34:23.178137   12994 config.go:182] Loaded profile config "force-systemd-flag-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:34:23.178214   12994 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:34:23.178267   12994 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:34:23.181741   12994 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:34:23.188805   12994 start.go:297] selected driver: qemu2
	I1010 11:34:23.188811   12994 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:34:23.188817   12994 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:34:23.191478   12994 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:34:23.195826   12994 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:34:23.198855   12994 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1010 11:34:23.198872   12994 cni.go:84] Creating CNI manager for ""
	I1010 11:34:23.198902   12994 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:34:23.198910   12994 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:34:23.198953   12994 start.go:340] cluster config:
	{Name:docker-flags-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:34:23.203850   12994 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:34:23.211744   12994 out.go:177] * Starting "docker-flags-736000" primary control-plane node in "docker-flags-736000" cluster
	I1010 11:34:23.214931   12994 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:34:23.214961   12994 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:34:23.214975   12994 cache.go:56] Caching tarball of preloaded images
	I1010 11:34:23.215083   12994 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:34:23.215090   12994 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:34:23.215161   12994 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/docker-flags-736000/config.json ...
	I1010 11:34:23.215177   12994 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/docker-flags-736000/config.json: {Name:mkc8a51949d944e9e2bc173b06b987960fe79964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:34:23.215448   12994 start.go:360] acquireMachinesLock for docker-flags-736000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:34:23.215505   12994 start.go:364] duration metric: took 46.125µs to acquireMachinesLock for "docker-flags-736000"
	I1010 11:34:23.215519   12994 start.go:93] Provisioning new machine with config: &{Name:docker-flags-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:34:23.215555   12994 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:34:23.223850   12994 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1010 11:34:23.241794   12994 start.go:159] libmachine.API.Create for "docker-flags-736000" (driver="qemu2")
	I1010 11:34:23.241817   12994 client.go:168] LocalClient.Create starting
	I1010 11:34:23.241898   12994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:34:23.241937   12994 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:23.241950   12994 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:23.241989   12994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:34:23.242024   12994 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:23.242031   12994 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:23.242385   12994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:34:23.390816   12994 main.go:141] libmachine: Creating SSH key...
	I1010 11:34:23.530856   12994 main.go:141] libmachine: Creating Disk image...
	I1010 11:34:23.530863   12994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:34:23.531042   12994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/disk.qcow2
	I1010 11:34:23.540926   12994 main.go:141] libmachine: STDOUT: 
	I1010 11:34:23.540948   12994 main.go:141] libmachine: STDERR: 
	I1010 11:34:23.541004   12994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/disk.qcow2 +20000M
	I1010 11:34:23.549329   12994 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:34:23.549342   12994 main.go:141] libmachine: STDERR: 
	I1010 11:34:23.549362   12994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/disk.qcow2
	I1010 11:34:23.549368   12994 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:34:23.549381   12994 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:34:23.549409   12994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:4e:78:5e:49:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/disk.qcow2
	I1010 11:34:23.551232   12994 main.go:141] libmachine: STDOUT: 
	I1010 11:34:23.551246   12994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:34:23.551264   12994 client.go:171] duration metric: took 309.443666ms to LocalClient.Create
	I1010 11:34:25.553427   12994 start.go:128] duration metric: took 2.337873417s to createHost
	I1010 11:34:25.553492   12994 start.go:83] releasing machines lock for "docker-flags-736000", held for 2.33800025s
	W1010 11:34:25.553533   12994 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:34:25.572706   12994 out.go:177] * Deleting "docker-flags-736000" in qemu2 ...
	W1010 11:34:25.593263   12994 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:34:25.593281   12994 start.go:729] Will try again in 5 seconds ...
	I1010 11:34:30.595501   12994 start.go:360] acquireMachinesLock for docker-flags-736000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:34:30.620866   12994 start.go:364] duration metric: took 25.23025ms to acquireMachinesLock for "docker-flags-736000"
	I1010 11:34:30.620975   12994 start.go:93] Provisioning new machine with config: &{Name:docker-flags-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:34:30.621274   12994 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:34:30.630995   12994 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1010 11:34:30.678073   12994 start.go:159] libmachine.API.Create for "docker-flags-736000" (driver="qemu2")
	I1010 11:34:30.678136   12994 client.go:168] LocalClient.Create starting
	I1010 11:34:30.678258   12994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:34:30.678332   12994 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:30.678349   12994 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:30.678427   12994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:34:30.678485   12994 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:30.678499   12994 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:30.679110   12994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:34:30.841429   12994 main.go:141] libmachine: Creating SSH key...
	I1010 11:34:30.957738   12994 main.go:141] libmachine: Creating Disk image...
	I1010 11:34:30.957744   12994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:34:30.957921   12994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/disk.qcow2
	I1010 11:34:30.967596   12994 main.go:141] libmachine: STDOUT: 
	I1010 11:34:30.967615   12994 main.go:141] libmachine: STDERR: 
	I1010 11:34:30.967672   12994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/disk.qcow2 +20000M
	I1010 11:34:30.976135   12994 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:34:30.976155   12994 main.go:141] libmachine: STDERR: 
	I1010 11:34:30.976167   12994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/disk.qcow2
	I1010 11:34:30.976172   12994 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:34:30.976179   12994 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:34:30.976211   12994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:9b:3a:20:c1:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/docker-flags-736000/disk.qcow2
	I1010 11:34:30.977975   12994 main.go:141] libmachine: STDOUT: 
	I1010 11:34:30.977987   12994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:34:30.977999   12994 client.go:171] duration metric: took 299.858958ms to LocalClient.Create
	I1010 11:34:32.980157   12994 start.go:128] duration metric: took 2.358872458s to createHost
	I1010 11:34:32.980227   12994 start.go:83] releasing machines lock for "docker-flags-736000", held for 2.359350167s
	W1010 11:34:32.980532   12994 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:34:32.991497   12994 out.go:201] 
	W1010 11:34:32.996611   12994 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:34:32.996649   12994 out.go:270] * 
	* 
	W1010 11:34:32.999266   12994 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:34:33.008291   12994 out.go:201] 
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-736000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-736000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-736000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (88.477792ms)
-- stdout --
	* The control-plane node docker-flags-736000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-736000"
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-736000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-736000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-736000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-736000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-736000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-736000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-736000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (48.885792ms)
-- stdout --
	* The control-plane node docker-flags-736000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-736000"
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-736000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-736000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-736000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-736000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-10 11:34:33.162727 -0700 PDT m=+731.963979085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-736000 -n docker-flags-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-736000 -n docker-flags-736000: exit status 7 (32.234166ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-736000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-736000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-736000
--- FAIL: TestDockerFlags (10.20s)
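For reference, the assertion TestDockerFlags never reached: after a successful start it runs `sudo systemctl show docker --property=Environment --no-pager` in the guest and expects each --docker-env pair to appear. A sketch of that check (docker_test.go:56-63), with a hypothetical run helper standing in for the harness's ssh plumbing:

// verifyDockerEnv mirrors the docker_test.go env assertions: systemd prints
// a single line such as "Environment=FOO=BAR BAZ=BAT", and each requested
// key/value pair must appear in it.
package main

import (
	"fmt"
	"strings"
)

func verifyDockerEnv(run func(cmd string) (string, error), want []string) error {
	out, err := run("sudo systemctl show docker --property=Environment --no-pager")
	if err != nil {
		return fmt.Errorf("systemctl show failed: %w", err)
	}
	for _, kv := range want {
		if !strings.Contains(out, kv) {
			return fmt.Errorf("expected %q in docker's Environment, got: %s", kv, out)
		}
	}
	return nil
}

func main() {
	// stand-in for ssh into the guest, returning what a healthy run would print
	fake := func(string) (string, error) { return "Environment=FOO=BAR BAZ=BAT\n", nil }
	fmt.Println(verifyDockerEnv(fake, []string{"FOO=BAR", "BAZ=BAT"})) // <nil>
}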

TestForceSystemdFlag (10.24s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-473000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-473000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.037488333s)
-- stdout --
	* [force-systemd-flag-473000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-473000" primary control-plane node in "force-systemd-flag-473000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-473000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1010 11:34:17.977483   12973 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:34:17.977617   12973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:34:17.977620   12973 out.go:358] Setting ErrFile to fd 2...
	I1010 11:34:17.977623   12973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:34:17.977743   12973 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:34:17.978950   12973 out.go:352] Setting JSON to false
	I1010 11:34:17.997644   12973 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7428,"bootTime":1728577829,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:34:17.997709   12973 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:34:18.002838   12973 out.go:177] * [force-systemd-flag-473000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:34:18.010920   12973 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:34:18.010958   12973 notify.go:220] Checking for updates...
	I1010 11:34:18.017849   12973 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:34:18.020842   12973 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:34:18.023897   12973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:34:18.026867   12973 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:34:18.029841   12973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:34:18.033165   12973 config.go:182] Loaded profile config "force-systemd-env-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:34:18.033244   12973 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:34:18.033292   12973 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:34:18.037760   12973 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:34:18.044853   12973 start.go:297] selected driver: qemu2
	I1010 11:34:18.044860   12973 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:34:18.044866   12973 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:34:18.047362   12973 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:34:18.049818   12973 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:34:18.052903   12973 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1010 11:34:18.052915   12973 cni.go:84] Creating CNI manager for ""
	I1010 11:34:18.052935   12973 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:34:18.052939   12973 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:34:18.052973   12973 start.go:340] cluster config:
	{Name:force-systemd-flag-473000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:34:18.057441   12973 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:34:18.065833   12973 out.go:177] * Starting "force-systemd-flag-473000" primary control-plane node in "force-systemd-flag-473000" cluster
	I1010 11:34:18.069846   12973 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:34:18.069862   12973 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:34:18.069870   12973 cache.go:56] Caching tarball of preloaded images
	I1010 11:34:18.069940   12973 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:34:18.069946   12973 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:34:18.069996   12973 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/force-systemd-flag-473000/config.json ...
	I1010 11:34:18.070008   12973 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/force-systemd-flag-473000/config.json: {Name:mk0c62ddbf54fc089da6b2ac85bcfb0bbc944b22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:34:18.070260   12973 start.go:360] acquireMachinesLock for force-systemd-flag-473000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:34:18.070310   12973 start.go:364] duration metric: took 42.459µs to acquireMachinesLock for "force-systemd-flag-473000"
	I1010 11:34:18.070324   12973 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:34:18.070374   12973 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:34:18.077840   12973 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1010 11:34:18.094365   12973 start.go:159] libmachine.API.Create for "force-systemd-flag-473000" (driver="qemu2")
	I1010 11:34:18.094387   12973 client.go:168] LocalClient.Create starting
	I1010 11:34:18.094447   12973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:34:18.094486   12973 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:18.094498   12973 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:18.094536   12973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:34:18.094564   12973 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:18.094573   12973 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:18.094920   12973 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:34:18.247061   12973 main.go:141] libmachine: Creating SSH key...
	I1010 11:34:18.384286   12973 main.go:141] libmachine: Creating Disk image...
	I1010 11:34:18.384293   12973 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:34:18.384473   12973 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/disk.qcow2
	I1010 11:34:18.394197   12973 main.go:141] libmachine: STDOUT: 
	I1010 11:34:18.394216   12973 main.go:141] libmachine: STDERR: 
	I1010 11:34:18.394270   12973 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/disk.qcow2 +20000M
	I1010 11:34:18.402542   12973 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:34:18.402558   12973 main.go:141] libmachine: STDERR: 
	I1010 11:34:18.402580   12973 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/disk.qcow2
	I1010 11:34:18.402585   12973 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:34:18.402599   12973 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:34:18.402628   12973 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:d2:31:3b:b9:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/disk.qcow2
	I1010 11:34:18.404379   12973 main.go:141] libmachine: STDOUT: 
	I1010 11:34:18.404394   12973 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:34:18.404413   12973 client.go:171] duration metric: took 310.024041ms to LocalClient.Create
	I1010 11:34:20.406571   12973 start.go:128] duration metric: took 2.336202833s to createHost
	I1010 11:34:20.406631   12973 start.go:83] releasing machines lock for "force-systemd-flag-473000", held for 2.336333958s
	W1010 11:34:20.406740   12973 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:34:20.417851   12973 out.go:177] * Deleting "force-systemd-flag-473000" in qemu2 ...
	W1010 11:34:20.441751   12973 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:34:20.441782   12973 start.go:729] Will try again in 5 seconds ...
	I1010 11:34:25.443961   12973 start.go:360] acquireMachinesLock for force-systemd-flag-473000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:34:25.553673   12973 start.go:364] duration metric: took 109.546375ms to acquireMachinesLock for "force-systemd-flag-473000"
	I1010 11:34:25.553779   12973 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:34:25.553994   12973 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:34:25.563694   12973 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1010 11:34:25.611508   12973 start.go:159] libmachine.API.Create for "force-systemd-flag-473000" (driver="qemu2")
	I1010 11:34:25.611575   12973 client.go:168] LocalClient.Create starting
	I1010 11:34:25.611687   12973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:34:25.611763   12973 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:25.611780   12973 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:25.611849   12973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:34:25.611911   12973 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:25.611924   12973 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:25.612526   12973 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:34:25.774217   12973 main.go:141] libmachine: Creating SSH key...
	I1010 11:34:25.923130   12973 main.go:141] libmachine: Creating Disk image...
	I1010 11:34:25.923138   12973 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:34:25.923349   12973 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/disk.qcow2
	I1010 11:34:25.933438   12973 main.go:141] libmachine: STDOUT: 
	I1010 11:34:25.933454   12973 main.go:141] libmachine: STDERR: 
	I1010 11:34:25.933522   12973 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/disk.qcow2 +20000M
	I1010 11:34:25.941908   12973 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:34:25.941926   12973 main.go:141] libmachine: STDERR: 
	I1010 11:34:25.941939   12973 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/disk.qcow2
	I1010 11:34:25.941945   12973 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:34:25.941954   12973 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:34:25.941982   12973 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:ad:89:82:f2:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-flag-473000/disk.qcow2
	I1010 11:34:25.943686   12973 main.go:141] libmachine: STDOUT: 
	I1010 11:34:25.943700   12973 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:34:25.943714   12973 client.go:171] duration metric: took 332.137583ms to LocalClient.Create
	I1010 11:34:27.945898   12973 start.go:128] duration metric: took 2.391893708s to createHost
	I1010 11:34:27.945988   12973 start.go:83] releasing machines lock for "force-systemd-flag-473000", held for 2.392295s
	W1010 11:34:27.946395   12973 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-473000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-473000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:34:27.955066   12973 out.go:201] 
	W1010 11:34:27.959223   12973 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:34:27.959254   12973 out.go:270] * 
	* 
	W1010 11:34:27.961613   12973 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:34:27.970116   12973 out.go:201] 
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-473000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-473000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-473000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (89.206333ms)
-- stdout --
	* The control-plane node force-systemd-flag-473000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-473000"
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-473000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-10 11:34:28.077292 -0700 PDT m=+726.878493918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-473000 -n force-systemd-flag-473000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-473000 -n force-systemd-flag-473000: exit status 7 (37.226792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-473000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-473000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-473000
--- FAIL: TestForceSystemdFlag (10.24s)
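For reference, the check TestForceSystemdFlag never reached: with --force-systemd, `docker info --format {{.CgroupDriver}}` inside the guest should print "systemd". A sketch of that assertion (docker_test.go:110), again with a hypothetical run helper in place of the harness's ssh call:

// checkCgroupDriver mirrors the cgroup-driver verification the test performs.
package main

import (
	"fmt"
	"strings"
)

func checkCgroupDriver(run func(cmd string) (string, error)) error {
	out, err := run("docker info --format {{.CgroupDriver}}")
	if err != nil {
		return fmt.Errorf("docker info failed: %w", err)
	}
	if got := strings.TrimSpace(out); got != "systemd" {
		return fmt.Errorf("cgroup driver = %q, want systemd", got)
	}
	return nil
}

func main() {
	// stand-in for ssh into the guest, returning what a healthy run would print
	fake := func(string) (string, error) { return "systemd\n", nil }
	fmt.Println(checkCgroupDriver(fake)) // <nil>
}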

TestForceSystemdEnv (10.13s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-849000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1010 11:34:14.529924   11135 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1010 11:34:14.529943   11135 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1010 11:34:14.529988   11135 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1010 11:34:14.530015   11135 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/002/docker-machine-driver-hyperkit
I1010 11:34:14.922904   11135 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x109a22400 0x109a22400 0x109a22400 0x109a22400 0x109a22400 0x109a22400 0x109a22400] Decompressors:map[bz2:0x14000681110 gz:0x14000681118 tar:0x14000680c60 tar.bz2:0x14000680c70 tar.gz:0x14000680d20 tar.xz:0x14000680d40 tar.zst:0x14000680d50 tbz2:0x14000680c70 tgz:0x14000680d20 txz:0x14000680d40 tzst:0x14000680d50 xz:0x14000681130 zip:0x14000681150 zst:0x14000681138] Getters:map[file:0x140015e9020 http:0x140004f4aa0 https:0x140004f4af0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1010 11:34:14.923026   11135 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/002/docker-machine-driver-hyperkit
I1010 11:34:17.891465   11135 install.go:79] stdout: 
W1010 11:34:17.891638   11135 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/002/docker-machine-driver-hyperkit 

I1010 11:34:17.891662   11135 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/002/docker-machine-driver-hyperkit]
I1010 11:34:17.909247   11135 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/002/docker-machine-driver-hyperkit]
I1010 11:34:17.924480   11135 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/002/docker-machine-driver-hyperkit]
I1010 11:34:17.935914   11135 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-849000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.930047708s)

-- stdout --
	* [force-systemd-env-849000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-849000" primary control-plane node in "force-systemd-env-849000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-849000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:34:12.977188   12953 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:34:12.977358   12953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:34:12.977361   12953 out.go:358] Setting ErrFile to fd 2...
	I1010 11:34:12.977363   12953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:34:12.977487   12953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:34:12.978654   12953 out.go:352] Setting JSON to false
	I1010 11:34:12.997019   12953 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7423,"bootTime":1728577829,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:34:12.997088   12953 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:34:13.002056   12953 out.go:177] * [force-systemd-env-849000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:34:13.009010   12953 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:34:13.009111   12953 notify.go:220] Checking for updates...
	I1010 11:34:13.016005   12953 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:34:13.018989   12953 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:34:13.025976   12953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:34:13.033015   12953 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:34:13.041004   12953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1010 11:34:13.044352   12953 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:34:13.044399   12953 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:34:13.048015   12953 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:34:13.054979   12953 start.go:297] selected driver: qemu2
	I1010 11:34:13.054985   12953 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:34:13.054989   12953 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:34:13.057441   12953 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:34:13.060012   12953 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:34:13.063132   12953 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1010 11:34:13.063143   12953 cni.go:84] Creating CNI manager for ""
	I1010 11:34:13.063162   12953 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:34:13.063166   12953 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:34:13.063195   12953 start.go:340] cluster config:
	{Name:force-systemd-env-849000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:34:13.067273   12953 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:34:13.075022   12953 out.go:177] * Starting "force-systemd-env-849000" primary control-plane node in "force-systemd-env-849000" cluster
	I1010 11:34:13.078980   12953 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:34:13.079004   12953 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:34:13.079011   12953 cache.go:56] Caching tarball of preloaded images
	I1010 11:34:13.079093   12953 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:34:13.079098   12953 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:34:13.079155   12953 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/force-systemd-env-849000/config.json ...
	I1010 11:34:13.079166   12953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/force-systemd-env-849000/config.json: {Name:mk574ebd0e6fb56f26964071a44a83c60b828424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:34:13.079432   12953 start.go:360] acquireMachinesLock for force-systemd-env-849000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:34:13.079479   12953 start.go:364] duration metric: took 38.667µs to acquireMachinesLock for "force-systemd-env-849000"
	I1010 11:34:13.079491   12953 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:34:13.079513   12953 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:34:13.083866   12953 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1010 11:34:13.098537   12953 start.go:159] libmachine.API.Create for "force-systemd-env-849000" (driver="qemu2")
	I1010 11:34:13.098567   12953 client.go:168] LocalClient.Create starting
	I1010 11:34:13.098645   12953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:34:13.098680   12953 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:13.098691   12953 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:13.098729   12953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:34:13.098758   12953 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:13.098764   12953 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:13.099080   12953 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:34:13.246915   12953 main.go:141] libmachine: Creating SSH key...
	I1010 11:34:13.408221   12953 main.go:141] libmachine: Creating Disk image...
	I1010 11:34:13.408231   12953 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:34:13.408431   12953 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/disk.qcow2
	I1010 11:34:13.418437   12953 main.go:141] libmachine: STDOUT: 
	I1010 11:34:13.418455   12953 main.go:141] libmachine: STDERR: 
	I1010 11:34:13.418514   12953 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/disk.qcow2 +20000M
	I1010 11:34:13.427158   12953 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:34:13.427172   12953 main.go:141] libmachine: STDERR: 
	I1010 11:34:13.427189   12953 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/disk.qcow2
	I1010 11:34:13.427196   12953 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:34:13.427211   12953 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:34:13.427239   12953 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d0:6c:f6:51:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/disk.qcow2
	I1010 11:34:13.429042   12953 main.go:141] libmachine: STDOUT: 
	I1010 11:34:13.429056   12953 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:34:13.429080   12953 client.go:171] duration metric: took 330.50725ms to LocalClient.Create
	I1010 11:34:15.431259   12953 start.go:128] duration metric: took 2.351739542s to createHost
	I1010 11:34:15.431321   12953 start.go:83] releasing machines lock for "force-systemd-env-849000", held for 2.351856333s
	W1010 11:34:15.431373   12953 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:34:15.438590   12953 out.go:177] * Deleting "force-systemd-env-849000" in qemu2 ...
	W1010 11:34:15.463645   12953 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:34:15.463684   12953 start.go:729] Will try again in 5 seconds ...
	I1010 11:34:20.465792   12953 start.go:360] acquireMachinesLock for force-systemd-env-849000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:34:20.466193   12953 start.go:364] duration metric: took 336.666µs to acquireMachinesLock for "force-systemd-env-849000"
	I1010 11:34:20.466325   12953 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:34:20.466531   12953 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:34:20.475973   12953 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1010 11:34:20.521332   12953 start.go:159] libmachine.API.Create for "force-systemd-env-849000" (driver="qemu2")
	I1010 11:34:20.521392   12953 client.go:168] LocalClient.Create starting
	I1010 11:34:20.521523   12953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:34:20.521604   12953 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:20.521624   12953 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:20.521686   12953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:34:20.521745   12953 main.go:141] libmachine: Decoding PEM data...
	I1010 11:34:20.521761   12953 main.go:141] libmachine: Parsing certificate...
	I1010 11:34:20.522257   12953 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:34:20.680398   12953 main.go:141] libmachine: Creating SSH key...
	I1010 11:34:20.810510   12953 main.go:141] libmachine: Creating Disk image...
	I1010 11:34:20.810517   12953 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:34:20.810691   12953 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/disk.qcow2
	I1010 11:34:20.820354   12953 main.go:141] libmachine: STDOUT: 
	I1010 11:34:20.820374   12953 main.go:141] libmachine: STDERR: 
	I1010 11:34:20.820437   12953 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/disk.qcow2 +20000M
	I1010 11:34:20.828782   12953 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:34:20.828798   12953 main.go:141] libmachine: STDERR: 
	I1010 11:34:20.828811   12953 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/disk.qcow2
	I1010 11:34:20.828816   12953 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:34:20.828827   12953 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:34:20.828863   12953 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:9b:f2:66:99:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/force-systemd-env-849000/disk.qcow2
	I1010 11:34:20.830554   12953 main.go:141] libmachine: STDOUT: 
	I1010 11:34:20.830574   12953 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:34:20.830586   12953 client.go:171] duration metric: took 309.191167ms to LocalClient.Create
	I1010 11:34:22.832736   12953 start.go:128] duration metric: took 2.366205166s to createHost
	I1010 11:34:22.832793   12953 start.go:83] releasing machines lock for "force-systemd-env-849000", held for 2.36660275s
	W1010 11:34:22.833230   12953 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-849000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-849000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:34:22.842865   12953 out.go:201] 
	W1010 11:34:22.845855   12953 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:34:22.845887   12953 out.go:270] * 
	* 
	W1010 11:34:22.848260   12953 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:34:22.858848   12953 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-849000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-849000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-849000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (84.211167ms)

-- stdout --
	* The control-plane node force-systemd-env-849000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-849000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-849000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-10 11:34:22.964931 -0700 PDT m=+721.766082751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-849000 -n force-systemd-env-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-849000 -n force-systemd-env-849000: exit status 7 (35.770458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-849000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-849000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-849000
--- FAIL: TestForceSystemdEnv (10.13s)
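Every failure in this block traces back to the same root cause: nothing is accepting connections on /var/run/socket_vmnet. A minimal pre-flight sketch in Go (our own addition, not part of minikube) that would surface this before running "minikube start --driver=qemu2"; the socket path is the SocketVMnetPath from the cluster config logged above:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Probe the unix socket the qemu2 driver hands to socket_vmnet_client.
    	const sock = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// This is exactly the "Connection refused" every run above hits:
    		// the socket_vmnet daemon is not running (or not listening here).
    		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }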

TestErrorSpam/setup (9.88s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-462000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-462000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 --driver=qemu2 : exit status 80 (9.8737455s)

-- stdout --
	* [nospam-462000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-462000" primary control-plane node in "nospam-462000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-462000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-462000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-462000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-462000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-462000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=19787
- KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-462000" primary control-plane node in "nospam-462000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-462000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-462000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.88s)
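The error_spam_test.go:96 failures above come from a stderr allowlist check: a clean start should emit no unexpected stderr lines. A simplified Go sketch of that idea (our reconstruction; the real test's allowlist and plumbing differ), fed two stderr lines copied from the failed run:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	stderr := `! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
    * Failed to start qemu2 VM. Running "minikube delete -p nospam-462000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`
    	// Illustrative allowlist: a healthy `minikube start` is expected to
    	// produce no stderr, so every non-empty line gets flagged as spam.
    	allowed := []string{}
    	for _, line := range strings.Split(stderr, "\n") {
    		ok := false
    		for _, prefix := range allowed {
    			if strings.HasPrefix(line, prefix) {
    				ok = true
    				break
    			}
    		}
    		if !ok && strings.TrimSpace(line) != "" {
    			fmt.Printf("unexpected stderr: %q\n", line)
    		}
    	}
    }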

TestFunctional/serial/StartWithProxy (9.95s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-444000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-444000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.875056083s)

-- stdout --
	* [functional-444000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-444000" primary control-plane node in "functional-444000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-444000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53151 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53151 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53151 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-444000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-444000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-444000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=19787
- KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-444000" primary control-plane node in "functional-444000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-444000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:53151 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:53151 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:53151 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-444000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (72.760583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.95s)
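The "Local proxy ignored" warnings in the stderr above are expected behavior: the test exports HTTP_PROXY=localhost:53151, and minikube declines to pass a loopback proxy into the VM, since the guest cannot reach the host's localhost. A rough Go sketch of that decision (our simplification, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"strings"
    )

    func main() {
    	proxy := os.Getenv("HTTP_PROXY") // "localhost:53151" in the run above
    	host := proxy
    	if h, _, err := net.SplitHostPort(proxy); err == nil {
    		host = h
    	}
    	// A proxy on the host's loopback is useless inside the guest VM.
    	if host == "localhost" || strings.HasPrefix(host, "127.") {
    		fmt.Printf("! Local proxy ignored: not passing HTTP_PROXY=%s to docker env.\n", proxy)
    		return
    	}
    	fmt.Printf("passing HTTP_PROXY=%s to docker env\n", proxy)
    }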

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
I1010 11:23:27.444328   11135 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-444000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-444000 --alsologtostderr -v=8: exit status 80 (5.189176042s)

-- stdout --
	* [functional-444000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-444000" primary control-plane node in "functional-444000" cluster
	* Restarting existing qemu2 VM for "functional-444000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-444000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:23:27.477411   11343 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:23:27.477557   11343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:23:27.477560   11343 out.go:358] Setting ErrFile to fd 2...
	I1010 11:23:27.477563   11343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:23:27.477689   11343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:23:27.478794   11343 out.go:352] Setting JSON to false
	I1010 11:23:27.496188   11343 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6778,"bootTime":1728577829,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:23:27.496256   11343 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:23:27.500755   11343 out.go:177] * [functional-444000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:23:27.508755   11343 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:23:27.508815   11343 notify.go:220] Checking for updates...
	I1010 11:23:27.515732   11343 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:23:27.519509   11343 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:23:27.522727   11343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:23:27.525764   11343 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:23:27.528729   11343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:23:27.532056   11343 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:23:27.532101   11343 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:23:27.536707   11343 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:23:27.541752   11343 start.go:297] selected driver: qemu2
	I1010 11:23:27.541757   11343 start.go:901] validating driver "qemu2" against &{Name:functional-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:23:27.541802   11343 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:23:27.544286   11343 cni.go:84] Creating CNI manager for ""
	I1010 11:23:27.544322   11343 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:23:27.544373   11343 start.go:340] cluster config:
	{Name:functional-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:23:27.548900   11343 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:23:27.552808   11343 out.go:177] * Starting "functional-444000" primary control-plane node in "functional-444000" cluster
	I1010 11:23:27.556765   11343 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:23:27.556789   11343 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:23:27.556798   11343 cache.go:56] Caching tarball of preloaded images
	I1010 11:23:27.556880   11343 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:23:27.556886   11343 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:23:27.556951   11343 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/functional-444000/config.json ...
	I1010 11:23:27.557648   11343 start.go:360] acquireMachinesLock for functional-444000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:23:27.557680   11343 start.go:364] duration metric: took 26.291µs to acquireMachinesLock for "functional-444000"
	I1010 11:23:27.557691   11343 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:23:27.557694   11343 fix.go:54] fixHost starting: 
	I1010 11:23:27.557816   11343 fix.go:112] recreateIfNeeded on functional-444000: state=Stopped err=<nil>
	W1010 11:23:27.557826   11343 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:23:27.562731   11343 out.go:177] * Restarting existing qemu2 VM for "functional-444000" ...
	I1010 11:23:27.569627   11343 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:23:27.569663   11343 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:91:18:fc:49:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/disk.qcow2
	I1010 11:23:27.571901   11343 main.go:141] libmachine: STDOUT: 
	I1010 11:23:27.571926   11343 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:23:27.571958   11343 fix.go:56] duration metric: took 14.261209ms for fixHost
	I1010 11:23:27.571963   11343 start.go:83] releasing machines lock for "functional-444000", held for 14.278041ms
	W1010 11:23:27.571969   11343 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:23:27.572006   11343 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:23:27.572011   11343 start.go:729] Will try again in 5 seconds ...
	I1010 11:23:32.574257   11343 start.go:360] acquireMachinesLock for functional-444000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:23:32.574617   11343 start.go:364] duration metric: took 275.709µs to acquireMachinesLock for "functional-444000"
	I1010 11:23:32.574728   11343 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:23:32.574747   11343 fix.go:54] fixHost starting: 
	I1010 11:23:32.575436   11343 fix.go:112] recreateIfNeeded on functional-444000: state=Stopped err=<nil>
	W1010 11:23:32.575466   11343 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:23:32.579876   11343 out.go:177] * Restarting existing qemu2 VM for "functional-444000" ...
	I1010 11:23:32.583947   11343 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:23:32.584241   11343 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:91:18:fc:49:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/disk.qcow2
	I1010 11:23:32.594255   11343 main.go:141] libmachine: STDOUT: 
	I1010 11:23:32.594344   11343 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:23:32.594429   11343 fix.go:56] duration metric: took 19.677708ms for fixHost
	I1010 11:23:32.594456   11343 start.go:83] releasing machines lock for "functional-444000", held for 19.814583ms
	W1010 11:23:32.594686   11343 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-444000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-444000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:23:32.603825   11343 out.go:201] 
	W1010 11:23:32.607940   11343 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:23:32.607972   11343 out.go:270] * 
	* 
	W1010 11:23:32.610759   11343 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:23:32.617807   11343 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-444000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.190793334s for "functional-444000" cluster.
I1010 11:23:32.635411   11135 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (72.6955ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
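
The root cause for this and nearly every failure below is visible in the stderr above: the qemu2 driver launches QEMU through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, which usually means the socket_vmnet daemon on the build agent is not serving its socket. A minimal diagnostic sketch, assuming the Homebrew socket_vmnet install that these paths point at:

	# Does the socket exist, and is anything serving it?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# With a Homebrew-managed service, a restart is the usual fix:
	sudo brew services restart socket_vmnet

Until that socket accepts connections, every test that needs a running VM fails with the same "Connection refused".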

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (28.528542ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-444000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (33.694042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
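
This failure is pure fallout from the failed start: a successful "minikube start" would have written a functional-444000 context into the kubeconfig, and the test merely reads it back. What the test expects on a healthy run, using stock kubectl (shown for reference):

	kubectl config get-contexts                     # functional-444000 should be listed and selected
	kubectl config current-context                  # expected output: functional-444000
	kubectl config use-context functional-444000    # how the context could be re-selected by hand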

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-444000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-444000 get po -A: exit status 1 (26.148166ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-444000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-444000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-444000\n"*: args "kubectl --context functional-444000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-444000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (34.133292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)
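
The recurring post-mortem helper is worth decoding once: --format={{.Host}} is a Go template over minikube's status output, so the command prints only the host field ("Stopped" here). minikube documents the status exit code as a bitmask over the host, kubelet and apiserver states, so exit status 7 restates that the whole cluster is down rather than signalling a broken command, which is why the harness notes it "may be ok":

	out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000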

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh sudo crictl images: exit status 83 (51.9825ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-444000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)
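
The assertion itself is simple: list the images inside the node over ssh and look for the cached pause:3.3 digest quoted above. On a running node the check amounts to:

	# expected to succeed, with "3d18732f8686c" (pause:3.3) somewhere in the listing:
	out/minikube-darwin-arm64 -p functional-444000 ssh sudo crictl images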

TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (43.748333ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-444000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (47.052459ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (48.976459ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-444000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)
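
For readability, the sequence this test drives (each command also appears verbatim in the audit table further down) is: remove the image inside the node, confirm it is gone, reload it from minikube's cache, then confirm it is back. Every ssh step needs the VM that never came up:

	out/minikube-darwin-arm64 -p functional-444000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-444000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail here
	out/minikube-darwin-arm64 -p functional-444000 cache reload
	out/minikube-darwin-arm64 -p functional-444000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to succeed here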

TestFunctional/serial/MinikubeKubectlCmd (2.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 kubectl -- --context functional-444000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 kubectl -- --context functional-444000 get pods: exit status 1 (2.086900791s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-444000
	* no server found for cluster "functional-444000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-444000 kubectl -- --context functional-444000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (35.445291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.12s)
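
Note the invocation shape: "minikube kubectl --" forwards everything after the double dash to a version-matched kubectl binary, so the failure above is the same missing-context error as before, just reached through minikube's wrapper:

	# everything after "--" goes straight to kubectl:
	out/minikube-darwin-arm64 -p functional-444000 kubectl -- get pods -A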

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.19s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-444000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-444000 get pods: exit status 1 (1.153803084s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-444000
	* no server found for cluster "functional-444000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-444000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (33.620542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.19s)

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-444000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-444000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.193727542s)

-- stdout --
	* [functional-444000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-444000" primary control-plane node in "functional-444000" cluster
	* Restarting existing qemu2 VM for "functional-444000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-444000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-444000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-444000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.194668958s for "functional-444000" cluster.
I1010 11:23:44.607119   11135 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (71.989375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
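
The flag under test follows minikube's component.key=value convention, and the profile records it as an ExtraOptions entry, visible in the cluster config dump below as {Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}:

	out/minikube-darwin-arm64 start -p functional-444000 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all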

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-444000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-444000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.251625ms)

** stderr ** 
	error: context "functional-444000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-444000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (33.417666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 logs: exit status 83 (80.657125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-370000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
	|         | -p download-only-370000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
	| delete  | -p download-only-370000                                                  | download-only-370000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
	| start   | -o=json --download-only                                                  | download-only-988000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
	|         | -p download-only-988000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
	| delete  | -p download-only-988000                                                  | download-only-988000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
	| delete  | -p download-only-370000                                                  | download-only-370000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
	| delete  | -p download-only-988000                                                  | download-only-988000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
	| start   | --download-only -p                                                       | binary-mirror-999000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
	|         | binary-mirror-999000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:53123                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-999000                                                  | binary-mirror-999000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
	| addons  | enable dashboard -p                                                      | addons-244000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
	|         | addons-244000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-244000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
	|         | addons-244000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-244000 --wait=true                                             | addons-244000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-244000                                                         | addons-244000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
	| start   | -p nospam-462000 -n=1 --memory=2250 --wait=false                         | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-462000                                                         | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	| start   | -p functional-444000                                                     | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-444000                                                     | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-444000 cache add                                              | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-444000 cache add                                              | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-444000 cache add                                              | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-444000 cache add                                              | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	|         | minikube-local-cache-test:functional-444000                              |                      |         |         |                     |                     |
	| cache   | functional-444000 cache delete                                           | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	|         | minikube-local-cache-test:functional-444000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	| ssh     | functional-444000 ssh sudo                                               | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-444000                                                        | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-444000 ssh                                                    | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-444000 cache reload                                           | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	| ssh     | functional-444000 ssh                                                    | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-444000 kubectl --                                             | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | --context functional-444000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-444000                                                     | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 11:23:39
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 11:23:39.442955   11418 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:23:39.443113   11418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:23:39.443115   11418 out.go:358] Setting ErrFile to fd 2...
	I1010 11:23:39.443117   11418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:23:39.443238   11418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:23:39.444307   11418 out.go:352] Setting JSON to false
	I1010 11:23:39.461637   11418 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6790,"bootTime":1728577829,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:23:39.461706   11418 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:23:39.467056   11418 out.go:177] * [functional-444000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:23:39.475009   11418 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:23:39.475040   11418 notify.go:220] Checking for updates...
	I1010 11:23:39.483996   11418 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:23:39.487977   11418 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:23:39.491067   11418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:23:39.494055   11418 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:23:39.497015   11418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:23:39.500314   11418 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:23:39.500373   11418 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:23:39.505008   11418 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:23:39.512027   11418 start.go:297] selected driver: qemu2
	I1010 11:23:39.512030   11418 start.go:901] validating driver "qemu2" against &{Name:functional-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:23:39.512072   11418 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:23:39.514630   11418 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:23:39.514723   11418 cni.go:84] Creating CNI manager for ""
	I1010 11:23:39.514746   11418 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:23:39.514796   11418 start.go:340] cluster config:
	{Name:functional-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:23:39.519232   11418 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:23:39.525984   11418 out.go:177] * Starting "functional-444000" primary control-plane node in "functional-444000" cluster
	I1010 11:23:39.530077   11418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:23:39.530089   11418 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:23:39.530097   11418 cache.go:56] Caching tarball of preloaded images
	I1010 11:23:39.530171   11418 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:23:39.530176   11418 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:23:39.530221   11418 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/functional-444000/config.json ...
	I1010 11:23:39.530699   11418 start.go:360] acquireMachinesLock for functional-444000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:23:39.530748   11418 start.go:364] duration metric: took 44.083µs to acquireMachinesLock for "functional-444000"
	I1010 11:23:39.530756   11418 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:23:39.530759   11418 fix.go:54] fixHost starting: 
	I1010 11:23:39.530883   11418 fix.go:112] recreateIfNeeded on functional-444000: state=Stopped err=<nil>
	W1010 11:23:39.530892   11418 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:23:39.539074   11418 out.go:177] * Restarting existing qemu2 VM for "functional-444000" ...
	I1010 11:23:39.543000   11418 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:23:39.543040   11418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:91:18:fc:49:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/disk.qcow2
	I1010 11:23:39.545171   11418 main.go:141] libmachine: STDOUT: 
	I1010 11:23:39.545185   11418 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:23:39.545221   11418 fix.go:56] duration metric: took 14.460667ms for fixHost
	I1010 11:23:39.545223   11418 start.go:83] releasing machines lock for "functional-444000", held for 14.472584ms
	W1010 11:23:39.545228   11418 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:23:39.545257   11418 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:23:39.545261   11418 start.go:729] Will try again in 5 seconds ...
	I1010 11:23:44.547464   11418 start.go:360] acquireMachinesLock for functional-444000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:23:44.547859   11418 start.go:364] duration metric: took 281.459µs to acquireMachinesLock for "functional-444000"
	I1010 11:23:44.547975   11418 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:23:44.547985   11418 fix.go:54] fixHost starting: 
	I1010 11:23:44.548602   11418 fix.go:112] recreateIfNeeded on functional-444000: state=Stopped err=<nil>
	W1010 11:23:44.548619   11418 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:23:44.553196   11418 out.go:177] * Restarting existing qemu2 VM for "functional-444000" ...
	I1010 11:23:44.558121   11418 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:23:44.558368   11418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:91:18:fc:49:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/disk.qcow2
	I1010 11:23:44.568627   11418 main.go:141] libmachine: STDOUT: 
	I1010 11:23:44.568707   11418 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:23:44.568804   11418 fix.go:56] duration metric: took 20.81875ms for fixHost
	I1010 11:23:44.568820   11418 start.go:83] releasing machines lock for "functional-444000", held for 20.944ms
	W1010 11:23:44.569064   11418 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-444000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:23:44.578120   11418 out.go:201] 
	W1010 11:23:44.581091   11418 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:23:44.581120   11418 out.go:270] * 
	W1010 11:23:44.583741   11418 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:23:44.593157   11418 out.go:201] 
	
	
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-444000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-370000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | -p download-only-370000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| delete  | -p download-only-370000                                                  | download-only-370000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| start   | -o=json --download-only                                                  | download-only-988000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | -p download-only-988000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| delete  | -p download-only-988000                                                  | download-only-988000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| delete  | -p download-only-370000                                                  | download-only-370000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| delete  | -p download-only-988000                                                  | download-only-988000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| start   | --download-only -p                                                       | binary-mirror-999000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | binary-mirror-999000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:53123                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-999000                                                  | binary-mirror-999000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| addons  | enable dashboard -p                                                      | addons-244000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | addons-244000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-244000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | addons-244000                                                            |                      |         |         |                     |                     |
| start   | -p addons-244000 --wait=true                                             | addons-244000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-244000                                                         | addons-244000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| start   | -p nospam-462000 -n=1 --memory=2250 --wait=false                         | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-462000                                                         | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
| start   | -p functional-444000                                                     | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-444000                                                     | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-444000 cache add                                              | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-444000 cache add                                              | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-444000 cache add                                              | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-444000 cache add                                              | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | minikube-local-cache-test:functional-444000                              |                      |         |         |                     |                     |
| cache   | functional-444000 cache delete                                           | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | minikube-local-cache-test:functional-444000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
| ssh     | functional-444000 ssh sudo                                               | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-444000                                                        | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-444000 ssh                                                    | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-444000 cache reload                                           | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
| ssh     | functional-444000 ssh                                                    | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-444000 kubectl --                                             | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | --context functional-444000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-444000                                                     | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/10/10 11:23:39
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1010 11:23:39.442955   11418 out.go:345] Setting OutFile to fd 1 ...
I1010 11:23:39.443113   11418 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:23:39.443115   11418 out.go:358] Setting ErrFile to fd 2...
I1010 11:23:39.443117   11418 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:23:39.443238   11418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
I1010 11:23:39.444307   11418 out.go:352] Setting JSON to false
I1010 11:23:39.461637   11418 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6790,"bootTime":1728577829,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1010 11:23:39.461706   11418 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1010 11:23:39.467056   11418 out.go:177] * [functional-444000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1010 11:23:39.475009   11418 out.go:177]   - MINIKUBE_LOCATION=19787
I1010 11:23:39.475040   11418 notify.go:220] Checking for updates...
I1010 11:23:39.483996   11418 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
I1010 11:23:39.487977   11418 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1010 11:23:39.491067   11418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1010 11:23:39.494055   11418 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
I1010 11:23:39.497015   11418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1010 11:23:39.500314   11418 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:23:39.500373   11418 driver.go:394] Setting default libvirt URI to qemu:///system
I1010 11:23:39.505008   11418 out.go:177] * Using the qemu2 driver based on existing profile
I1010 11:23:39.512027   11418 start.go:297] selected driver: qemu2
I1010 11:23:39.512030   11418 start.go:901] validating driver "qemu2" against &{Name:functional-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1010 11:23:39.512072   11418 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1010 11:23:39.514630   11418 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1010 11:23:39.514723   11418 cni.go:84] Creating CNI manager for ""
I1010 11:23:39.514746   11418 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1010 11:23:39.514796   11418 start.go:340] cluster config:
{Name:functional-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1010 11:23:39.519232   11418 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1010 11:23:39.525984   11418 out.go:177] * Starting "functional-444000" primary control-plane node in "functional-444000" cluster
I1010 11:23:39.530077   11418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1010 11:23:39.530089   11418 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I1010 11:23:39.530097   11418 cache.go:56] Caching tarball of preloaded images
I1010 11:23:39.530171   11418 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1010 11:23:39.530176   11418 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1010 11:23:39.530221   11418 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/functional-444000/config.json ...
I1010 11:23:39.530699   11418 start.go:360] acquireMachinesLock for functional-444000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1010 11:23:39.530748   11418 start.go:364] duration metric: took 44.083µs to acquireMachinesLock for "functional-444000"
I1010 11:23:39.530756   11418 start.go:96] Skipping create...Using existing machine configuration
I1010 11:23:39.530759   11418 fix.go:54] fixHost starting: 
I1010 11:23:39.530883   11418 fix.go:112] recreateIfNeeded on functional-444000: state=Stopped err=<nil>
W1010 11:23:39.530892   11418 fix.go:138] unexpected machine state, will restart: <nil>
I1010 11:23:39.539074   11418 out.go:177] * Restarting existing qemu2 VM for "functional-444000" ...
I1010 11:23:39.543000   11418 qemu.go:418] Using hvf for hardware acceleration
I1010 11:23:39.543040   11418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:91:18:fc:49:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/disk.qcow2
I1010 11:23:39.545171   11418 main.go:141] libmachine: STDOUT: 
I1010 11:23:39.545185   11418 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1010 11:23:39.545221   11418 fix.go:56] duration metric: took 14.460667ms for fixHost
I1010 11:23:39.545223   11418 start.go:83] releasing machines lock for "functional-444000", held for 14.472584ms
W1010 11:23:39.545228   11418 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1010 11:23:39.545257   11418 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1010 11:23:39.545261   11418 start.go:729] Will try again in 5 seconds ...
I1010 11:23:44.547464   11418 start.go:360] acquireMachinesLock for functional-444000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1010 11:23:44.547859   11418 start.go:364] duration metric: took 281.459µs to acquireMachinesLock for "functional-444000"
I1010 11:23:44.547975   11418 start.go:96] Skipping create...Using existing machine configuration
I1010 11:23:44.547985   11418 fix.go:54] fixHost starting: 
I1010 11:23:44.548602   11418 fix.go:112] recreateIfNeeded on functional-444000: state=Stopped err=<nil>
W1010 11:23:44.548619   11418 fix.go:138] unexpected machine state, will restart: <nil>
I1010 11:23:44.553196   11418 out.go:177] * Restarting existing qemu2 VM for "functional-444000" ...
I1010 11:23:44.558121   11418 qemu.go:418] Using hvf for hardware acceleration
I1010 11:23:44.558368   11418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:91:18:fc:49:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/disk.qcow2
I1010 11:23:44.568627   11418 main.go:141] libmachine: STDOUT: 
I1010 11:23:44.568707   11418 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1010 11:23:44.568804   11418 fix.go:56] duration metric: took 20.81875ms for fixHost
I1010 11:23:44.568820   11418 start.go:83] releasing machines lock for "functional-444000", held for 20.944ms
W1010 11:23:44.569064   11418 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-444000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1010 11:23:44.578120   11418 out.go:201] 
W1010 11:23:44.581091   11418 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1010 11:23:44.581120   11418 out.go:270] * 
W1010 11:23:44.583741   11418 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1010 11:23:44.593157   11418 out.go:201] 

* The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
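
Note: this failure traces to a single root cause visible in the captured log: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM never gets its network, the control-plane host stays Stopped, and "minikube logs" exits 83 without any Linux guest output. The short Go sketch below reproduces just that connectivity probe so the build agent can be checked before a re-run; the socket path is taken from SocketVMnetPath in the cluster config above, while the probe itself is illustrative and not minikube's own code.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client connects to;
	// a "connection refused" here matches the driver-start failure above.
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

If the dial fails, the socket_vmnet daemon on the agent is not listening, and the log's suggested "minikube delete -p functional-444000" alone is unlikely to help until the daemon is restarted.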

TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3252112844/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-370000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | -p download-only-370000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| delete  | -p download-only-370000                                                  | download-only-370000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| start   | -o=json --download-only                                                  | download-only-988000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | -p download-only-988000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| delete  | -p download-only-988000                                                  | download-only-988000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| delete  | -p download-only-370000                                                  | download-only-370000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| delete  | -p download-only-988000                                                  | download-only-988000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| start   | --download-only -p                                                       | binary-mirror-999000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | binary-mirror-999000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:53123                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-999000                                                  | binary-mirror-999000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| addons  | enable dashboard -p                                                      | addons-244000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | addons-244000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-244000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | addons-244000                                                            |                      |         |         |                     |                     |
| start   | -p addons-244000 --wait=true                                             | addons-244000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-244000                                                         | addons-244000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
| start   | -p nospam-462000 -n=1 --memory=2250 --wait=false                         | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-462000 --log_dir                                                  | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-462000                                                         | nospam-462000        | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
| start   | -p functional-444000                                                     | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-444000                                                     | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-444000 cache add                                              | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-444000 cache add                                              | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-444000 cache add                                              | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-444000 cache add                                              | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | minikube-local-cache-test:functional-444000                              |                      |         |         |                     |                     |
| cache   | functional-444000 cache delete                                           | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | minikube-local-cache-test:functional-444000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
| ssh     | functional-444000 ssh sudo                                               | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-444000                                                        | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-444000 ssh                                                    | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-444000 cache reload                                           | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
| ssh     | functional-444000 ssh                                                    | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT | 10 Oct 24 11:23 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-444000 kubectl --                                             | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | --context functional-444000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-444000                                                     | functional-444000    | jenkins | v1.34.0 | 10 Oct 24 11:23 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/10/10 11:23:39
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1010 11:23:39.442955   11418 out.go:345] Setting OutFile to fd 1 ...
I1010 11:23:39.443113   11418 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:23:39.443115   11418 out.go:358] Setting ErrFile to fd 2...
I1010 11:23:39.443117   11418 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:23:39.443238   11418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
I1010 11:23:39.444307   11418 out.go:352] Setting JSON to false
I1010 11:23:39.461637   11418 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6790,"bootTime":1728577829,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1010 11:23:39.461706   11418 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1010 11:23:39.467056   11418 out.go:177] * [functional-444000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1010 11:23:39.475009   11418 out.go:177]   - MINIKUBE_LOCATION=19787
I1010 11:23:39.475040   11418 notify.go:220] Checking for updates...
I1010 11:23:39.483996   11418 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
I1010 11:23:39.487977   11418 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1010 11:23:39.491067   11418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1010 11:23:39.494055   11418 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
I1010 11:23:39.497015   11418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1010 11:23:39.500314   11418 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:23:39.500373   11418 driver.go:394] Setting default libvirt URI to qemu:///system
I1010 11:23:39.505008   11418 out.go:177] * Using the qemu2 driver based on existing profile
I1010 11:23:39.512027   11418 start.go:297] selected driver: qemu2
I1010 11:23:39.512030   11418 start.go:901] validating driver "qemu2" against &{Name:functional-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1010 11:23:39.512072   11418 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1010 11:23:39.514630   11418 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1010 11:23:39.514723   11418 cni.go:84] Creating CNI manager for ""
I1010 11:23:39.514746   11418 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1010 11:23:39.514796   11418 start.go:340] cluster config:
{Name:functional-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1010 11:23:39.519232   11418 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1010 11:23:39.525984   11418 out.go:177] * Starting "functional-444000" primary control-plane node in "functional-444000" cluster
I1010 11:23:39.530077   11418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1010 11:23:39.530089   11418 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I1010 11:23:39.530097   11418 cache.go:56] Caching tarball of preloaded images
I1010 11:23:39.530171   11418 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1010 11:23:39.530176   11418 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1010 11:23:39.530221   11418 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/functional-444000/config.json ...
I1010 11:23:39.530699   11418 start.go:360] acquireMachinesLock for functional-444000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1010 11:23:39.530748   11418 start.go:364] duration metric: took 44.083µs to acquireMachinesLock for "functional-444000"
I1010 11:23:39.530756   11418 start.go:96] Skipping create...Using existing machine configuration
I1010 11:23:39.530759   11418 fix.go:54] fixHost starting: 
I1010 11:23:39.530883   11418 fix.go:112] recreateIfNeeded on functional-444000: state=Stopped err=<nil>
W1010 11:23:39.530892   11418 fix.go:138] unexpected machine state, will restart: <nil>
I1010 11:23:39.539074   11418 out.go:177] * Restarting existing qemu2 VM for "functional-444000" ...
I1010 11:23:39.543000   11418 qemu.go:418] Using hvf for hardware acceleration
I1010 11:23:39.543040   11418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:91:18:fc:49:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/disk.qcow2
I1010 11:23:39.545171   11418 main.go:141] libmachine: STDOUT: 
I1010 11:23:39.545185   11418 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1010 11:23:39.545221   11418 fix.go:56] duration metric: took 14.460667ms for fixHost
I1010 11:23:39.545223   11418 start.go:83] releasing machines lock for "functional-444000", held for 14.472584ms
W1010 11:23:39.545228   11418 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1010 11:23:39.545257   11418 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1010 11:23:39.545261   11418 start.go:729] Will try again in 5 seconds ...
I1010 11:23:44.547464   11418 start.go:360] acquireMachinesLock for functional-444000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1010 11:23:44.547859   11418 start.go:364] duration metric: took 281.459µs to acquireMachinesLock for "functional-444000"
I1010 11:23:44.547975   11418 start.go:96] Skipping create...Using existing machine configuration
I1010 11:23:44.547985   11418 fix.go:54] fixHost starting: 
I1010 11:23:44.548602   11418 fix.go:112] recreateIfNeeded on functional-444000: state=Stopped err=<nil>
W1010 11:23:44.548619   11418 fix.go:138] unexpected machine state, will restart: <nil>
I1010 11:23:44.553196   11418 out.go:177] * Restarting existing qemu2 VM for "functional-444000" ...
I1010 11:23:44.558121   11418 qemu.go:418] Using hvf for hardware acceleration
I1010 11:23:44.558368   11418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:91:18:fc:49:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/functional-444000/disk.qcow2
I1010 11:23:44.568627   11418 main.go:141] libmachine: STDOUT: 
I1010 11:23:44.568707   11418 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1010 11:23:44.568804   11418 fix.go:56] duration metric: took 20.81875ms for fixHost
I1010 11:23:44.568820   11418 start.go:83] releasing machines lock for "functional-444000", held for 20.944ms
W1010 11:23:44.569064   11418 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-444000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1010 11:23:44.578120   11418 out.go:201] 
W1010 11:23:44.581091   11418 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1010 11:23:44.581120   11418 out.go:270] * 
W1010 11:23:44.583741   11418 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1010 11:23:44.593157   11418 out.go:201] 
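==> Editor's note: socket_vmnet probe <==
Both start attempts above fail at the same point: qemu is launched through socket_vmnet_client, and the connect to "/var/run/socket_vmnet" is refused, which means the socket_vmnet daemon is not running (or not listening at that path) on this agent. A minimal Go sketch of the same reachability check; this is illustrative, not minikube's code, and the socket path is simply the one from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client hands to qemu.
	// "connection refused" here reproduces the driver failure in this log.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}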
--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-444000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-444000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.668791ms)

** stderr ** 
	error: context "functional-444000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-444000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
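==> Editor's note: missing kubeconfig context <==
Because the VM never started, no "functional-444000" entry was ever written to the kubeconfig, so every kubectl step in this and the following tests fails before reaching a server. A sketch of the same context lookup using k8s.io/client-go (an assumed dependency; this is not the test suite's actual code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the way kubectl does (KUBECONFIG, else ~/.kube/config).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["functional-444000"]; !ok {
		// Matches the kubectl error captured above.
		fmt.Println(`context "functional-444000" does not exist`)
	}
}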

TestFunctional/parallel/DashboardCmd (0.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-444000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-444000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-444000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-444000 --alsologtostderr -v=1] stderr:
I1010 11:24:27.330789   11731 out.go:345] Setting OutFile to fd 1 ...
I1010 11:24:27.331220   11731 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:24:27.331223   11731 out.go:358] Setting ErrFile to fd 2...
I1010 11:24:27.331226   11731 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:24:27.331357   11731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
I1010 11:24:27.331543   11731 mustload.go:65] Loading cluster: functional-444000
I1010 11:24:27.331760   11731 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:24:27.335028   11731 out.go:177] * The control-plane node functional-444000 host is not running: state=Stopped
I1010 11:24:27.337977   11731 out.go:177]   To start a cluster, run: "minikube start -p functional-444000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (45.539542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)
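==> Editor's note: how the URL check fails <==
The dashboard test starts `minikube dashboard --url` and scans its stdout for a URL; with the host stopped, the command only prints the "host is not running" advice, so the scan finds nothing. A rough Go sketch of that detection step (an assumed shape, not the suite's exact helper):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// In the test, the input is the piped stdout of the dashboard command.
	urlRe := regexp.MustCompile(`https?://\S+`)
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if u := urlRe.FindString(sc.Text()); u != "" {
			fmt.Println("dashboard URL:", u)
			return
		}
	}
	fmt.Println("output didn't produce a URL") // the failure reported above
	os.Exit(1)
}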

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 status: exit status 7 (33.413708ms)

-- stdout --
	functional-444000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-444000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (33.666625ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-444000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 status -o json: exit status 7 (33.526958ms)

-- stdout --
	{"Name":"functional-444000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-444000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (33.38225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
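==> Editor's note: the status format string <==
The custom format passed above is a Go text/template. Note that "kublet" is literal label text in the test's format string (only {{.Kubelet}} is a field lookup), so the odd spelling is cosmetic and not the cause of the failure; the exit status 7 reflects the stopped host. A small sketch of how such a template renders against a status struct (field names mirror the JSON output above; these are not minikube's actual type definitions):

package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	// Renders exactly the stdout captured above for a stopped cluster.
	if err := tmpl.Execute(os.Stdout, Status{"Stopped", "Stopped", "Stopped", "Stopped"}); err != nil {
		panic(err)
	}
}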

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-444000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-444000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.807917ms)

** stderr ** 
	error: context "functional-444000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-444000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-444000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-444000 describe po hello-node-connect: exit status 1 (26.362542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-444000

** /stderr **
functional_test.go:1604: "kubectl --context functional-444000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-444000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-444000 logs -l app=hello-node-connect: exit status 1 (26.1655ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-444000

** /stderr **
functional_test.go:1610: "kubectl --context functional-444000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-444000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-444000 describe svc hello-node-connect: exit status 1 (26.289167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-444000

** /stderr **
functional_test.go:1616: "kubectl --context functional-444000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (34.404208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-444000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (34.516375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "echo hello": exit status 83 (47.853792ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-444000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-444000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-444000\"\n"*. args "out/minikube-darwin-arm64 -p functional-444000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "cat /etc/hostname": exit status 83 (47.802917ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-444000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-444000"- but got *"* The control-plane node functional-444000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-444000\"\n"*. args "out/minikube-darwin-arm64 -p functional-444000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (34.949875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (56.541333ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-444000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh -n functional-444000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh -n functional-444000 "sudo cat /home/docker/cp-test.txt": exit status 83 (47.079333ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-444000 ssh -n functional-444000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-444000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-444000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 cp functional-444000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3295400657/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 cp functional-444000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3295400657/001/cp-test.txt: exit status 83 (46.349375ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-444000 cp functional-444000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3295400657/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh -n functional-444000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh -n functional-444000 "sudo cat /home/docker/cp-test.txt": exit status 83 (55.669125ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-444000 ssh -n functional-444000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3295400657/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-444000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-444000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (53.995917ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-444000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh -n functional-444000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh -n functional-444000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (44.825459ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-444000 ssh -n functional-444000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-444000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-444000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.31s)
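==> Editor's note: reading the want/got diffs <==
The "content mismatch (-want +got)" blocks above (and in FileSync/CertSync below) are github.com/google/go-cmp output: "-" lines are the expected file content, "+" lines are the advice text the stopped cluster returned instead. A tiny reproduction, assuming the go-cmp module is available:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-444000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-444000\"\n"
	// Prints a -want +got diff in the same style as the helpers_test.go output above.
	fmt.Println(cmp.Diff(want, got))
}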

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/11135/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /etc/test/nested/copy/11135/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /etc/test/nested/copy/11135/hosts": exit status 83 (46.424042ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /etc/test/nested/copy/11135/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-444000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-444000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (34.588ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.32s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/11135.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /etc/ssl/certs/11135.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /etc/ssl/certs/11135.pem": exit status 83 (45.865917ms)

-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/11135.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-444000 ssh \"sudo cat /etc/ssl/certs/11135.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/11135.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-444000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-444000"
  	"""
  )
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/11135.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /usr/share/ca-certificates/11135.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /usr/share/ca-certificates/11135.pem": exit status 83 (45.687333ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/11135.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-444000 ssh \"sudo cat /usr/share/ca-certificates/11135.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/11135.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-444000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-444000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (45.506542ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-444000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-444000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-444000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/111352.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /etc/ssl/certs/111352.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /etc/ssl/certs/111352.pem": exit status 83 (49.594542ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/111352.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-444000 ssh \"sudo cat /etc/ssl/certs/111352.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/111352.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-444000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-444000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/111352.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /usr/share/ca-certificates/111352.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /usr/share/ca-certificates/111352.pem": exit status 83 (49.609291ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/111352.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-444000 ssh \"sudo cat /usr/share/ca-certificates/111352.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/111352.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-444000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-444000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (43.802459ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-444000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-444000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-444000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (34.892542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.32s)
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-444000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-444000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.33025ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-444000
** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-444000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-444000
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-444000
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-444000
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-444000
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-444000
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-444000 -n functional-444000: exit status 7 (34.002167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "sudo systemctl is-active crio": exit status 83 (50.534833ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-444000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-444000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
TestFunctional/parallel/Version/components (0.05s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 version -o=json --components: exit status 83 (46.98775ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-444000 image ls --format short --alsologtostderr:
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-444000 image ls --format short --alsologtostderr:
I1010 11:24:27.806573   11748 out.go:345] Setting OutFile to fd 1 ...
I1010 11:24:27.806746   11748 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:24:27.806749   11748 out.go:358] Setting ErrFile to fd 2...
I1010 11:24:27.806751   11748 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:24:27.806881   11748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
I1010 11:24:27.807342   11748 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:24:27.807406   11748 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-444000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-444000 image ls --format table --alsologtostderr:
I1010 11:24:27.885562   11752 out.go:345] Setting OutFile to fd 1 ...
I1010 11:24:27.885723   11752 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:24:27.885726   11752 out.go:358] Setting ErrFile to fd 2...
I1010 11:24:27.885729   11752 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:24:27.885858   11752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
I1010 11:24:27.886246   11752 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:24:27.886305   11752 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-444000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-444000 image ls --format json --alsologtostderr:
I1010 11:24:27.845637   11750 out.go:345] Setting OutFile to fd 1 ...
I1010 11:24:27.845801   11750 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:24:27.845804   11750 out.go:358] Setting ErrFile to fd 2...
I1010 11:24:27.845807   11750 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:24:27.845940   11750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
I1010 11:24:27.846331   11750 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:24:27.846391   11750 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-444000 image ls --format yaml --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-444000 image ls --format yaml --alsologtostderr:
I1010 11:24:27.767171   11746 out.go:345] Setting OutFile to fd 1 ...
I1010 11:24:27.767362   11746 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:24:27.767366   11746 out.go:358] Setting ErrFile to fd 2...
I1010 11:24:27.767368   11746 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:24:27.767480   11746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
I1010 11:24:27.767875   11746 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:24:27.767933   11746 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh pgrep buildkitd: exit status 83 (43.792542ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image build -t localhost/my-image:functional-444000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-444000 image build -t localhost/my-image:functional-444000 testdata/build --alsologtostderr:
I1010 11:24:27.967895   11756 out.go:345] Setting OutFile to fd 1 ...
I1010 11:24:27.968570   11756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:24:27.968573   11756 out.go:358] Setting ErrFile to fd 2...
I1010 11:24:27.968576   11756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:24:27.968701   11756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
I1010 11:24:27.969095   11756 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:24:27.969521   11756 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:24:27.969746   11756 build_images.go:133] succeeded building to: 
I1010 11:24:27.969749   11756 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image ls
functional_test.go:446: expected "localhost/my-image:functional-444000" to be loaded into minikube but the image is not there
I1010 11:24:33.381832   11135 retry.go:31] will retry after 21.322116311s: Temporary Error: Get "http:": http: no Host in request URL
I1010 11:24:54.706156   11135 retry.go:31] will retry after 39.447215785s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)
TestFunctional/parallel/DockerEnv/bash (0.05s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-444000 docker-env) && out/minikube-darwin-arm64 status -p functional-444000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-444000 docker-env) && out/minikube-darwin-arm64 status -p functional-444000": exit status 1 (53.871667ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 update-context --alsologtostderr -v=2: exit status 83 (49.16175ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
** stderr ** 
	I1010 11:24:27.623361   11740 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:24:27.624045   11740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:24:27.624048   11740 out.go:358] Setting ErrFile to fd 2...
	I1010 11:24:27.624051   11740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:24:27.624163   11740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:24:27.624358   11740 mustload.go:65] Loading cluster: functional-444000
	I1010 11:24:27.624555   11740 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:24:27.628533   11740 out.go:177] * The control-plane node functional-444000 host is not running: state=Stopped
	I1010 11:24:27.636406   11740 out.go:177]   To start a cluster, run: "minikube start -p functional-444000"
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-444000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-444000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-444000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 update-context --alsologtostderr -v=2: exit status 83 (45.475333ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
** stderr ** 
	I1010 11:24:27.720132   11744 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:24:27.720294   11744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:24:27.720297   11744 out.go:358] Setting ErrFile to fd 2...
	I1010 11:24:27.720299   11744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:24:27.720424   11744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:24:27.720631   11744 mustload.go:65] Loading cluster: functional-444000
	I1010 11:24:27.720820   11744 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:24:27.725355   11744 out.go:177] * The control-plane node functional-444000 host is not running: state=Stopped
	I1010 11:24:27.729495   11744 out.go:177]   To start a cluster, run: "minikube start -p functional-444000"
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-444000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-444000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-444000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 update-context --alsologtostderr -v=2: exit status 83 (46.4705ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
** stderr ** 
	I1010 11:24:27.673580   11742 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:24:27.673758   11742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:24:27.673762   11742 out.go:358] Setting ErrFile to fd 2...
	I1010 11:24:27.673764   11742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:24:27.673880   11742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:24:27.674092   11742 mustload.go:65] Loading cluster: functional-444000
	I1010 11:24:27.674278   11742 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:24:27.679427   11742 out.go:177] * The control-plane node functional-444000 host is not running: state=Stopped
	I1010 11:24:27.683414   11742 out.go:177]   To start a cluster, run: "minikube start -p functional-444000"
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-444000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-444000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-444000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-444000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-444000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (27.223334ms)
** stderr ** 
	error: context "functional-444000" does not exist
** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-444000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)
TestFunctional/parallel/ServiceCmd/List (0.05s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 service list: exit status 83 (47.3615ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-444000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-444000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-444000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 service list -o json: exit status 83 (50.790084ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-444000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 service --namespace=default --https --url hello-node: exit status 83 (46.67375ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-444000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)
TestFunctional/parallel/ServiceCmd/Format (0.05s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 service hello-node --url --format={{.IP}}: exit status 83 (47.731334ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-444000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-444000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-444000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)
TestFunctional/parallel/ServiceCmd/URL (0.05s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 service hello-node --url: exit status 83 (46.829792ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-444000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
functional_test.go:1569: failed to parse "* The control-plane node functional-444000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-444000\"": parse "* The control-plane node functional-444000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-444000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-444000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-444000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1010 11:23:46.532130   11535 out.go:345] Setting OutFile to fd 1 ...
I1010 11:23:46.532376   11535 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:23:46.532378   11535 out.go:358] Setting ErrFile to fd 2...
I1010 11:23:46.532381   11535 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:23:46.532510   11535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
I1010 11:23:46.532739   11535 mustload.go:65] Loading cluster: functional-444000
I1010 11:23:46.532973   11535 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:23:46.537825   11535 out.go:177] * The control-plane node functional-444000 host is not running: state=Stopped
I1010 11:23:46.549761   11535 out.go:177]   To start a cluster, run: "minikube start -p functional-444000"
stdout: * The control-plane node functional-444000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-444000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-444000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 11536: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-444000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-444000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-444000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-444000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-444000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)
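
Note: exit code 83 again means the profile's host VM is stopped, so no tunnel process was ever created, and the cleanup warnings that follow ("process already finished", "file already closed") are just the harness tearing down pipes for a daemon that never ran. A minimal manual reproduction, assuming the profile name from these logs:

    # tunnel requires a running control plane; check state first
    out/minikube-darwin-arm64 -p functional-444000 status
    out/minikube-darwin-arm64 start -p functional-444000
    # then, in two separate shells, both invocations should stay up
    out/minikube-darwin-arm64 -p functional-444000 tunnel --alsologtostderr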

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-444000": client config: context "functional-444000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)
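
Note: "context ... does not exist" is a client-side kubeconfig failure; the functional-444000 cluster was never started, so no context was ever written. This can be confirmed without touching minikube:

    # the profile's context should be listed here once the cluster is up
    kubectl config get-contexts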

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1010 11:23:46.613693   11135 retry.go:31] will retry after 3.135078458s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-444000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-444000 get svc nginx-svc: exit status 1 (68.819708ms)

** stderr **
	Error in configuration: context was not found for specified context: functional-444000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-444000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.63s)
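
Note: the malformed "http:" target means the test read an empty ingress address for nginx-svc; without a running cluster and an active tunnel, the LoadBalancer service never receives an IP. On a working setup this is observable directly:

    # EXTERNAL-IP remains <pending> until "minikube tunnel" is running
    kubectl --context functional-444000 get svc nginx-svc -o wide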

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image load --daemon kicbase/echo-server:functional-444000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-444000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.33s)
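
Note: this and the three neighboring image subtests fail identically because there is no running node to receive the image; "image ls" is querying a stopped profile. The load/verify round trip the suite automates can be reproduced by hand with the same commands the logs show:

    # tag an image for the profile, push it into the node's runtime, then list it
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-444000
    out/minikube-darwin-arm64 -p functional-444000 image load --daemon kicbase/echo-server:functional-444000
    out/minikube-darwin-arm64 -p functional-444000 image ls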

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image load --daemon kicbase/echo-server:functional-444000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-444000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-444000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image load --daemon kicbase/echo-server:functional-444000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-444000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image save kicbase/echo-server:functional-444000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
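
Note: "image save" produced no tarball, which means the ImageLoadFromFile subtest below is failing on a missing input file rather than on the load path itself. The intended round trip, using the archive path from the logs:

    out/minikube-darwin-arm64 -p functional-444000 image save kicbase/echo-server:functional-444000 /Users/jenkins/workspace/echo-server-save.tar
    # verify the archive exists and is readable before loading it back
    test -f /Users/jenkins/workspace/echo-server-save.tar && tar -tf /Users/jenkins/workspace/echo-server-save.tar | head
    out/minikube-darwin-arm64 -p functional-444000 image load /Users/jenkins/workspace/echo-server-save.tar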

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-444000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1010 11:25:34.240407   11135 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.030715042s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 10 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
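
Note: the 15.03s elapsed time matches dig's retry budget exactly: +time=5 +tries=3 allows 3 attempts of 5s each against the single server, 15s total, before exit status 9 (no servers reached). The scutil dump above shows the cluster.local resolver is installed and even marked Reachable; the queries are routed correctly, but nothing answers on 10.96.0.10 because the cluster VM never started. A quicker probe with the same tool:

    # fail fast instead of waiting out the full 15s budget
    dig +time=2 +tries=1 @10.96.0.10 kubernetes.default.svc.cluster.local. A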

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1010 11:25:59.381883   11135 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:26:09.384440   11135 retry.go:31] will retry after 4.390470828s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1010 11:26:23.779645   11135 retry.go:31] will retry after 3.857648486s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:65329->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.06s)
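
Note: the final error is a UDP read timeout to 10.96.0.10:53, the same unreachable cluster DNS as in the dig test; 8.8.8.8 appears in the message only because Go's resolver walked the system server list. Which resolver macOS selects for the cluster domain can be checked with:

    # show the scoped resolver entry that routes *.cluster.local queries
    scutil --dns | grep -B 2 -A 6 'cluster.local'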

TestMultiControlPlane/serial/StartCluster (9.88s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-740000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-740000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.80148625s)
-- stdout --
	* [ha-740000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-740000" primary control-plane node in "ha-740000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-740000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1010 11:26:29.785260   11790 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:26:29.785418   11790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:26:29.785421   11790 out.go:358] Setting ErrFile to fd 2...
	I1010 11:26:29.785423   11790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:26:29.785564   11790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:26:29.786717   11790 out.go:352] Setting JSON to false
	I1010 11:26:29.804316   11790 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6960,"bootTime":1728577829,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:26:29.804386   11790 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:26:29.808918   11790 out.go:177] * [ha-740000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:26:29.816824   11790 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:26:29.816861   11790 notify.go:220] Checking for updates...
	I1010 11:26:29.822750   11790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:26:29.825803   11790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:26:29.828819   11790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:26:29.831765   11790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:26:29.834805   11790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:26:29.837997   11790 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:26:29.841796   11790 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:26:29.848908   11790 start.go:297] selected driver: qemu2
	I1010 11:26:29.848916   11790 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:26:29.848923   11790 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:26:29.851421   11790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:26:29.855814   11790 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:26:29.858886   11790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:26:29.858900   11790 cni.go:84] Creating CNI manager for ""
	I1010 11:26:29.858919   11790 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1010 11:26:29.858926   11790 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 11:26:29.858959   11790 start.go:340] cluster config:
	{Name:ha-740000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:26:29.863631   11790 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:26:29.870819   11790 out.go:177] * Starting "ha-740000" primary control-plane node in "ha-740000" cluster
	I1010 11:26:29.874810   11790 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:26:29.874825   11790 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:26:29.874832   11790 cache.go:56] Caching tarball of preloaded images
	I1010 11:26:29.874913   11790 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:26:29.874920   11790 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:26:29.875136   11790 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/ha-740000/config.json ...
	I1010 11:26:29.875148   11790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/ha-740000/config.json: {Name:mk28bfe1f6e329aec5d4ae90d7c80876f965ce5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:26:29.875412   11790 start.go:360] acquireMachinesLock for ha-740000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:26:29.875464   11790 start.go:364] duration metric: took 46.208µs to acquireMachinesLock for "ha-740000"
	I1010 11:26:29.875480   11790 start.go:93] Provisioning new machine with config: &{Name:ha-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:26:29.875514   11790 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:26:29.883745   11790 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:26:29.900902   11790 start.go:159] libmachine.API.Create for "ha-740000" (driver="qemu2")
	I1010 11:26:29.900927   11790 client.go:168] LocalClient.Create starting
	I1010 11:26:29.901001   11790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:26:29.901039   11790 main.go:141] libmachine: Decoding PEM data...
	I1010 11:26:29.901052   11790 main.go:141] libmachine: Parsing certificate...
	I1010 11:26:29.901098   11790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:26:29.901129   11790 main.go:141] libmachine: Decoding PEM data...
	I1010 11:26:29.901140   11790 main.go:141] libmachine: Parsing certificate...
	I1010 11:26:29.901493   11790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:26:30.042623   11790 main.go:141] libmachine: Creating SSH key...
	I1010 11:26:30.155633   11790 main.go:141] libmachine: Creating Disk image...
	I1010 11:26:30.155639   11790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:26:30.155847   11790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2
	I1010 11:26:30.165597   11790 main.go:141] libmachine: STDOUT: 
	I1010 11:26:30.165620   11790 main.go:141] libmachine: STDERR: 
	I1010 11:26:30.165670   11790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2 +20000M
	I1010 11:26:30.174070   11790 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:26:30.174086   11790 main.go:141] libmachine: STDERR: 
	I1010 11:26:30.174110   11790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2
	I1010 11:26:30.174116   11790 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:26:30.174127   11790 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:26:30.174164   11790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:bf:75:16:87:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2
	I1010 11:26:30.175978   11790 main.go:141] libmachine: STDOUT: 
	I1010 11:26:30.175993   11790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:26:30.176014   11790 client.go:171] duration metric: took 275.082292ms to LocalClient.Create
	I1010 11:26:32.178209   11790 start.go:128] duration metric: took 2.302686583s to createHost
	I1010 11:26:32.178316   11790 start.go:83] releasing machines lock for "ha-740000", held for 2.302812209s
	W1010 11:26:32.178390   11790 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:26:32.189570   11790 out.go:177] * Deleting "ha-740000" in qemu2 ...
	W1010 11:26:32.214920   11790 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:26:32.215019   11790 start.go:729] Will try again in 5 seconds ...
	I1010 11:26:37.217157   11790 start.go:360] acquireMachinesLock for ha-740000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:26:37.217651   11790 start.go:364] duration metric: took 414.417µs to acquireMachinesLock for "ha-740000"
	I1010 11:26:37.217763   11790 start.go:93] Provisioning new machine with config: &{Name:ha-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:26:37.218270   11790 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:26:37.228015   11790 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:26:37.275804   11790 start.go:159] libmachine.API.Create for "ha-740000" (driver="qemu2")
	I1010 11:26:37.275846   11790 client.go:168] LocalClient.Create starting
	I1010 11:26:37.275966   11790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:26:37.276043   11790 main.go:141] libmachine: Decoding PEM data...
	I1010 11:26:37.276062   11790 main.go:141] libmachine: Parsing certificate...
	I1010 11:26:37.276123   11790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:26:37.276180   11790 main.go:141] libmachine: Decoding PEM data...
	I1010 11:26:37.276194   11790 main.go:141] libmachine: Parsing certificate...
	I1010 11:26:37.276757   11790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:26:37.439781   11790 main.go:141] libmachine: Creating SSH key...
	I1010 11:26:37.489785   11790 main.go:141] libmachine: Creating Disk image...
	I1010 11:26:37.489791   11790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:26:37.489989   11790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2
	I1010 11:26:37.499666   11790 main.go:141] libmachine: STDOUT: 
	I1010 11:26:37.499689   11790 main.go:141] libmachine: STDERR: 
	I1010 11:26:37.499743   11790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2 +20000M
	I1010 11:26:37.508246   11790 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:26:37.508269   11790 main.go:141] libmachine: STDERR: 
	I1010 11:26:37.508280   11790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2
	I1010 11:26:37.508285   11790 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:26:37.508292   11790 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:26:37.508323   11790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:95:63:61:db:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2
	I1010 11:26:37.510143   11790 main.go:141] libmachine: STDOUT: 
	I1010 11:26:37.510157   11790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:26:37.510169   11790 client.go:171] duration metric: took 234.3195ms to LocalClient.Create
	I1010 11:26:39.512336   11790 start.go:128] duration metric: took 2.294051208s to createHost
	I1010 11:26:39.512404   11790 start.go:83] releasing machines lock for "ha-740000", held for 2.29474175s
	W1010 11:26:39.512750   11790 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-740000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-740000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:26:39.523509   11790 out.go:201] 
	W1010 11:26:39.527465   11790 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:26:39.527508   11790 out.go:270] * 
	* 
	W1010 11:26:39.530249   11790 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:26:39.540429   11790 out.go:201] 
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-740000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (72.370541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.88s)
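
Note: this is the root cause for every TestMultiControlPlane failure that follows: both VM creation attempts die with "Connection refused" on /var/run/socket_vmnet, so no host exists and everything downstream sees a stopped profile. A plausible triage sequence, assuming the Homebrew socket_vmnet install implied by the /opt/socket_vmnet client path:

    # the listener socket must exist before qemu clients can attach
    ls -l /var/run/socket_vmnet
    # Homebrew runs the daemon as a root service; restart it, then retry
    sudo brew services restart socket_vmnet
    out/minikube-darwin-arm64 delete -p ha-740000
    out/minikube-darwin-arm64 start -p ha-740000 --driver=qemu2 --network=socket_vmnet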

TestMultiControlPlane/serial/DeployApp (104.15s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (63.348833ms)

** stderr **
	error: cluster "ha-740000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- rollout status deployment/busybox: exit status 1 (60.607959ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (60.220708ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:26:39.813374   11135 retry.go:31] will retry after 709.526768ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.059708ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:26:40.631290   11135 retry.go:31] will retry after 1.197341272s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.134916ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:26:41.937097   11135 retry.go:31] will retry after 1.789663241s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.221375ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:26:43.835310   11135 retry.go:31] will retry after 1.703347364s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.373834ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:26:45.648402   11135 retry.go:31] will retry after 2.874218537s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.001958ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:26:48.630933   11135 retry.go:31] will retry after 6.092724181s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.49ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:26:54.833421   11135 retry.go:31] will retry after 16.173345498s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.96925ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:27:11.116690   11135 retry.go:31] will retry after 19.380147438s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.910916ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:27:30.607155   11135 retry.go:31] will retry after 18.000240119s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.964917ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:27:48.717639   11135 retry.go:31] will retry after 34.674165682s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.348209ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (60.11725ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- exec  -- nslookup kubernetes.io: exit status 1 (60.044333ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- exec  -- nslookup kubernetes.default: exit status 1 (59.968625ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (59.854458ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (33.877417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (104.15s)
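
Note: everything in this subtest is a cascade from the StartCluster failure; both stderr variants ("cluster ... does not exist" and "no server found for cluster") are client-side kubeconfig errors raised before any API request is attempted, which is why each of the repeated retries fails in roughly 100ms. The kubeconfig state can be inspected without a cluster:

    kubectl config get-contexts
    # list cluster entries; ha-740000 should carry a server URL once provisioning succeeds
    kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\n"}{end}'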

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-740000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.270458ms)

** stderr **
	error: no server found for cluster "ha-740000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (33.476542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-740000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-740000 -v=7 --alsologtostderr: exit status 83 (45.638834ms)

-- stdout --
	* The control-plane node ha-740000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-740000"

-- /stdout --
** stderr ** 
	I1010 11:28:23.902631   11874 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:28:23.902990   11874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:23.902994   11874 out.go:358] Setting ErrFile to fd 2...
	I1010 11:28:23.902996   11874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:23.903117   11874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:28:23.903330   11874 mustload.go:65] Loading cluster: ha-740000
	I1010 11:28:23.903541   11874 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:28:23.907703   11874 out.go:177] * The control-plane node ha-740000 host is not running: state=Stopped
	I1010 11:28:23.911571   11874 out.go:177]   To start a cluster, run: "minikube start -p ha-740000"
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-740000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (33.695458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)
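
Note: "node add" exits with code 83 (host stopped) before attempting any provisioning; a worker can only be added when the primary control plane is reachable. A minimal manual check with the same profile:

    out/minikube-darwin-arm64 -p ha-740000 status
    out/minikube-darwin-arm64 node list -p ha-740000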

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-740000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-740000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (25.8835ms)

** stderr **
	Error in configuration: context was not found for specified context: ha-740000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-740000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-740000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (33.467ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
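The two errors above are chained: the kubeconfig context "ha-740000" does not exist, so kubectl exits 1 with empty stdout, and decoding empty input is exactly what yields Go's "unexpected end of JSON input". A minimal reproduction (the label type is illustrative, not the test's own):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // kubectl wrote nothing to stdout, so the decode step sees zero bytes.
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }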

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-740000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-740000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-740000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-740000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-740000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-740000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-740000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-740000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (33.139334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
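For context: this check decodes "profile list --output json" and inspects Status plus the length of Config.Nodes; the quoted blob carries one node and Status "Starting", so both the 4-node and the "HAppy" assertion fail. A stripped-down decoder over the same field names (structs reduced to what the check needs, not minikube's actual types):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Field names match the JSON quoted in the failure message.
    type profileList struct {
        Valid []struct {
            Name   string
            Status string
            Config struct {
                Nodes []struct {
                    ControlPlane bool
                    Worker       bool
                }
            }
        } `json:"valid"`
    }

    func main() {
        raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-740000","Status":"Starting",` +
            `"Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        p := pl.Valid[0]
        fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
        // ha-740000: status=Starting nodes=1 -- hence both failures above.
    }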

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status --output json -v=7 --alsologtostderr: exit status 7 (33.543709ms)

                                                
                                                
-- stdout --
	{"Name":"ha-740000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:28:24.126125   11886 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:28:24.126283   11886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:24.126286   11886 out.go:358] Setting ErrFile to fd 2...
	I1010 11:28:24.126288   11886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:24.126418   11886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:28:24.126534   11886 out.go:352] Setting JSON to true
	I1010 11:28:24.126545   11886 mustload.go:65] Loading cluster: ha-740000
	I1010 11:28:24.126600   11886 notify.go:220] Checking for updates...
	I1010 11:28:24.126748   11886 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:28:24.126757   11886 status.go:174] checking status of ha-740000 ...
	I1010 11:28:24.127001   11886 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:28:24.127004   11886 status.go:384] host is not running, skipping remaining checks
	I1010 11:28:24.127006   11886 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-740000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (32.924042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
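The decode failure above is an object-vs-array mismatch: with only one node, "status --output json" prints a single JSON object, while the test unmarshals into a slice ([]cluster.Status in minikube; a stand-in Status type below). A minimal reproduction:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Stand-in for minikube's cluster.Status, with the fields shown above.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        // The single-object payload from the stdout block above.
        raw := []byte(`{"Name":"ha-740000","Host":"Stopped","Kubelet":"Stopped",` +
            `"APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        var statuses []Status
        err := json.Unmarshal(raw, &statuses)
        fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
    }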

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 node stop m02 -v=7 --alsologtostderr: exit status 85 (50.196584ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:28:24.192757   11890 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:28:24.193266   11890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:24.193270   11890 out.go:358] Setting ErrFile to fd 2...
	I1010 11:28:24.193272   11890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:24.193391   11890 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:28:24.193624   11890 mustload.go:65] Loading cluster: ha-740000
	I1010 11:28:24.193822   11890 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:28:24.197842   11890 out.go:201] 
	W1010 11:28:24.201932   11890 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1010 11:28:24.201938   11890 out.go:270] * 
	* 
	W1010 11:28:24.203771   11890 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:28:24.207891   11890 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-740000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr: exit status 7 (33.391041ms)

                                                
                                                
-- stdout --
	ha-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:28:24.243449   11892 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:28:24.243624   11892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:24.243627   11892 out.go:358] Setting ErrFile to fd 2...
	I1010 11:28:24.243629   11892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:24.243752   11892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:28:24.243867   11892 out.go:352] Setting JSON to false
	I1010 11:28:24.243877   11892 mustload.go:65] Loading cluster: ha-740000
	I1010 11:28:24.243933   11892 notify.go:220] Checking for updates...
	I1010 11:28:24.244061   11892 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:28:24.244069   11892 status.go:174] checking status of ha-740000 ...
	I1010 11:28:24.244316   11892 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:28:24.244320   11892 status.go:384] host is not running, skipping remaining checks
	I1010 11:28:24.244322   11892 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr": ha-740000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr": ha-740000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr": ha-740000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr": ha-740000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (33.433834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
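The four assertions above (ha_test.go:377-386) scan the plain-text status output for node stanzas and per-component "Running" markers; with one stopped node, every count comes up short. A rough sketch of that style of check (assuming plain substring counting, which may differ from the test's exact logic):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // The status text from the stdout block above: one stanza, all Stopped.
        out := "ha-740000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n" +
            "apiserver: Stopped\nkubeconfig: Stopped\n"
        fmt.Println("control planes:", strings.Count(out, "type: Control Plane")) // 1, expected 3
        fmt.Println("hosts running: ", strings.Count(out, "host: Running"))       // 0, expected 3
        fmt.Println("kubelets up:   ", strings.Count(out, "kubelet: Running"))    // 0, expected 3
    }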

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-740000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-740000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-740000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-740000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (33.068625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (51.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 node start m02 -v=7 --alsologtostderr: exit status 85 (50.554375ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:28:24.395779   11901 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:28:24.396137   11901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:24.396140   11901 out.go:358] Setting ErrFile to fd 2...
	I1010 11:28:24.396143   11901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:24.396260   11901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:28:24.396514   11901 mustload.go:65] Loading cluster: ha-740000
	I1010 11:28:24.396702   11901 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:28:24.400861   11901 out.go:201] 
	W1010 11:28:24.403939   11901 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1010 11:28:24.403944   11901 out.go:270] * 
	* 
	W1010 11:28:24.405720   11901 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:28:24.409812   11901 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:424: I1010 11:28:24.395779   11901 out.go:345] Setting OutFile to fd 1 ...
I1010 11:28:24.396137   11901 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:28:24.396140   11901 out.go:358] Setting ErrFile to fd 2...
I1010 11:28:24.396143   11901 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:28:24.396260   11901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
I1010 11:28:24.396514   11901 mustload.go:65] Loading cluster: ha-740000
I1010 11:28:24.396702   11901 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:28:24.400861   11901 out.go:201] 
W1010 11:28:24.403939   11901 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1010 11:28:24.403944   11901 out.go:270] * 
* 
W1010 11:28:24.405720   11901 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1010 11:28:24.409812   11901 out.go:201] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-740000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr: exit status 7 (33.47275ms)

                                                
                                                
-- stdout --
	ha-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:28:24.446485   11903 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:28:24.446645   11903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:24.446649   11903 out.go:358] Setting ErrFile to fd 2...
	I1010 11:28:24.446653   11903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:24.446760   11903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:28:24.446875   11903 out.go:352] Setting JSON to false
	I1010 11:28:24.446887   11903 mustload.go:65] Loading cluster: ha-740000
	I1010 11:28:24.446951   11903 notify.go:220] Checking for updates...
	I1010 11:28:24.447091   11903 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:28:24.447100   11903 status.go:174] checking status of ha-740000 ...
	I1010 11:28:24.447353   11903 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:28:24.447356   11903 status.go:384] host is not running, skipping remaining checks
	I1010 11:28:24.447358   11903 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1010 11:28:24.448284   11135 retry.go:31] will retry after 1.232008756s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr: exit status 7 (78.695334ms)

                                                
                                                
-- stdout --
	ha-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:28:25.759010   11905 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:28:25.759230   11905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:25.759235   11905 out.go:358] Setting ErrFile to fd 2...
	I1010 11:28:25.759239   11905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:25.759440   11905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:28:25.759600   11905 out.go:352] Setting JSON to false
	I1010 11:28:25.759616   11905 mustload.go:65] Loading cluster: ha-740000
	I1010 11:28:25.759663   11905 notify.go:220] Checking for updates...
	I1010 11:28:25.759919   11905 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:28:25.759930   11905 status.go:174] checking status of ha-740000 ...
	I1010 11:28:25.760288   11905 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:28:25.760294   11905 status.go:384] host is not running, skipping remaining checks
	I1010 11:28:25.760297   11905 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1010 11:28:25.761363   11135 retry.go:31] will retry after 2.084214645s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr: exit status 7 (76.5525ms)

                                                
                                                
-- stdout --
	ha-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:28:27.922399   11907 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:28:27.922593   11907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:27.922597   11907 out.go:358] Setting ErrFile to fd 2...
	I1010 11:28:27.922599   11907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:27.922778   11907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:28:27.922929   11907 out.go:352] Setting JSON to false
	I1010 11:28:27.922943   11907 mustload.go:65] Loading cluster: ha-740000
	I1010 11:28:27.922980   11907 notify.go:220] Checking for updates...
	I1010 11:28:27.923188   11907 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:28:27.923199   11907 status.go:174] checking status of ha-740000 ...
	I1010 11:28:27.923512   11907 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:28:27.923517   11907 status.go:384] host is not running, skipping remaining checks
	I1010 11:28:27.923519   11907 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1010 11:28:27.924516   11135 retry.go:31] will retry after 1.412380488s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr: exit status 7 (78.233792ms)

                                                
                                                
-- stdout --
	ha-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:28:29.415319   11909 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:28:29.415535   11909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:29.415539   11909 out.go:358] Setting ErrFile to fd 2...
	I1010 11:28:29.415542   11909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:29.415706   11909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:28:29.415874   11909 out.go:352] Setting JSON to false
	I1010 11:28:29.415889   11909 mustload.go:65] Loading cluster: ha-740000
	I1010 11:28:29.415935   11909 notify.go:220] Checking for updates...
	I1010 11:28:29.416153   11909 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:28:29.416163   11909 status.go:174] checking status of ha-740000 ...
	I1010 11:28:29.416505   11909 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:28:29.416510   11909 status.go:384] host is not running, skipping remaining checks
	I1010 11:28:29.416512   11909 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1010 11:28:29.417564   11135 retry.go:31] will retry after 4.174128876s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr: exit status 7 (77.694708ms)

                                                
                                                
-- stdout --
	ha-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:28:33.669608   11911 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:28:33.669823   11911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:33.669827   11911 out.go:358] Setting ErrFile to fd 2...
	I1010 11:28:33.669831   11911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:33.670006   11911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:28:33.670159   11911 out.go:352] Setting JSON to false
	I1010 11:28:33.670173   11911 mustload.go:65] Loading cluster: ha-740000
	I1010 11:28:33.670211   11911 notify.go:220] Checking for updates...
	I1010 11:28:33.670474   11911 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:28:33.670485   11911 status.go:174] checking status of ha-740000 ...
	I1010 11:28:33.670777   11911 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:28:33.670782   11911 status.go:384] host is not running, skipping remaining checks
	I1010 11:28:33.670785   11911 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1010 11:28:33.671794   11135 retry.go:31] will retry after 6.834672701s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr: exit status 7 (78.478459ms)

                                                
                                                
-- stdout --
	ha-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:28:40.585093   11913 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:28:40.585277   11913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:40.585281   11913 out.go:358] Setting ErrFile to fd 2...
	I1010 11:28:40.585284   11913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:40.585443   11913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:28:40.585596   11913 out.go:352] Setting JSON to false
	I1010 11:28:40.585611   11913 mustload.go:65] Loading cluster: ha-740000
	I1010 11:28:40.585641   11913 notify.go:220] Checking for updates...
	I1010 11:28:40.585870   11913 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:28:40.585881   11913 status.go:174] checking status of ha-740000 ...
	I1010 11:28:40.586195   11913 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:28:40.586199   11913 status.go:384] host is not running, skipping remaining checks
	I1010 11:28:40.586202   11913 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1010 11:28:40.587246   11135 retry.go:31] will retry after 5.196608972s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr: exit status 7 (75.840292ms)

                                                
                                                
-- stdout --
	ha-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:28:45.859900   11916 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:28:45.860114   11916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:45.860118   11916 out.go:358] Setting ErrFile to fd 2...
	I1010 11:28:45.860121   11916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:28:45.860296   11916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:28:45.860465   11916 out.go:352] Setting JSON to false
	I1010 11:28:45.860483   11916 mustload.go:65] Loading cluster: ha-740000
	I1010 11:28:45.860514   11916 notify.go:220] Checking for updates...
	I1010 11:28:45.860746   11916 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:28:45.860756   11916 status.go:174] checking status of ha-740000 ...
	I1010 11:28:45.861063   11916 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:28:45.861067   11916 status.go:384] host is not running, skipping remaining checks
	I1010 11:28:45.861070   11916 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1010 11:28:45.862071   11135 retry.go:31] will retry after 16.558286401s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr: exit status 7 (77.168208ms)

                                                
                                                
-- stdout --
	ha-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:29:02.497732   11921 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:29:02.497935   11921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:02.497939   11921 out.go:358] Setting ErrFile to fd 2...
	I1010 11:29:02.497943   11921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:02.498099   11921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:29:02.498251   11921 out.go:352] Setting JSON to false
	I1010 11:29:02.498267   11921 mustload.go:65] Loading cluster: ha-740000
	I1010 11:29:02.498298   11921 notify.go:220] Checking for updates...
	I1010 11:29:02.498526   11921 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:29:02.498536   11921 status.go:174] checking status of ha-740000 ...
	I1010 11:29:02.498846   11921 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:29:02.498851   11921 status.go:384] host is not running, skipping remaining checks
	I1010 11:29:02.498854   11921 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1010 11:29:02.499950   11135 retry.go:31] will retry after 13.304051323s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr: exit status 7 (75.103417ms)

                                                
                                                
-- stdout --
	ha-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:29:15.879306   11923 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:29:15.879513   11923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:15.879518   11923 out.go:358] Setting ErrFile to fd 2...
	I1010 11:29:15.879521   11923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:15.879681   11923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:29:15.879830   11923 out.go:352] Setting JSON to false
	I1010 11:29:15.879845   11923 mustload.go:65] Loading cluster: ha-740000
	I1010 11:29:15.879887   11923 notify.go:220] Checking for updates...
	I1010 11:29:15.880106   11923 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:29:15.880117   11923 status.go:174] checking status of ha-740000 ...
	I1010 11:29:15.880435   11923 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:29:15.880440   11923 status.go:384] host is not running, skipping remaining checks
	I1010 11:29:15.880442   11923 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (36.368875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (51.55s)
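The "will retry after ..." lines come from the harness's retry helper (retry.go:31), which re-polls status with growing, jittered delays until it succeeds or the overall budget is spent; here every attempt returns exit status 7, so the test gives up after roughly 51 seconds. A minimal sketch of that pattern (not minikube's actual retry implementation):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls f until it succeeds or attempts run out, sleeping a
    // jittered, roughly doubling delay between tries.
    func retry(attempts int, base time.Duration, f func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        err := retry(5, 200*time.Millisecond, func() error {
            return fmt.Errorf("exit status 7") // stand-in for the failing status call
        })
        fmt.Println("giving up:", err)
    }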

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-740000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-740000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-740000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-740000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-740000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-740000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-740000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-740000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
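Note: the two assertions above (ha_test.go:305 and ha_test.go:309) parse the same "profile list --output json" payload: the first counts the entries under Config.Nodes (expecting 4, finding 1), the second compares the top-level Status field (expecting "HAppy", finding "Starting"). A minimal Go sketch of that check, with struct fields named after the keys visible in the JSON above (an illustration, not the test's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors only the fields the two checks need.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("unexpected JSON:", err)
			return
		}
		for _, p := range pl.Valid {
			if p.Name != "ha-740000" {
				continue
			}
			// This run reports 1 node and "Starting"; the test wants 4 and "HAppy".
			fmt.Printf("nodes=%d status=%q\n", len(p.Config.Nodes), p.Status)
		}
	}
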
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (33.734458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.06s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-740000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-740000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-740000 -v=7 --alsologtostderr: (3.659321792s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-740000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-740000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.256819125s)

-- stdout --
	* [ha-740000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-740000" primary control-plane node in "ha-740000" cluster
	* Restarting existing qemu2 VM for "ha-740000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-740000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:29:19.767506   11952 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:29:19.767691   11952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:19.767695   11952 out.go:358] Setting ErrFile to fd 2...
	I1010 11:29:19.767699   11952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:19.767862   11952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:29:19.769186   11952 out.go:352] Setting JSON to false
	I1010 11:29:19.789297   11952 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7130,"bootTime":1728577829,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:29:19.789358   11952 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:29:19.794829   11952 out.go:177] * [ha-740000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:29:19.801864   11952 notify.go:220] Checking for updates...
	I1010 11:29:19.805840   11952 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:29:19.813817   11952 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:29:19.821899   11952 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:29:19.829756   11952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:29:19.837719   11952 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:29:19.841786   11952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:29:19.845134   11952 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:29:19.845187   11952 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:29:19.849747   11952 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:29:19.856838   11952 start.go:297] selected driver: qemu2
	I1010 11:29:19.856843   11952 start.go:901] validating driver "qemu2" against &{Name:ha-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:29:19.856890   11952 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:29:19.859749   11952 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:29:19.859798   11952 cni.go:84] Creating CNI manager for ""
	I1010 11:29:19.859829   11952 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1010 11:29:19.859910   11952 start.go:340] cluster config:
	{Name:ha-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-740000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:29:19.864985   11952 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:29:19.872789   11952 out.go:177] * Starting "ha-740000" primary control-plane node in "ha-740000" cluster
	I1010 11:29:19.876681   11952 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:29:19.876698   11952 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:29:19.876707   11952 cache.go:56] Caching tarball of preloaded images
	I1010 11:29:19.876793   11952 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:29:19.876799   11952 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:29:19.876862   11952 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/ha-740000/config.json ...
	I1010 11:29:19.877333   11952 start.go:360] acquireMachinesLock for ha-740000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:29:19.877396   11952 start.go:364] duration metric: took 56.042µs to acquireMachinesLock for "ha-740000"
	I1010 11:29:19.877408   11952 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:29:19.877412   11952 fix.go:54] fixHost starting: 
	I1010 11:29:19.877550   11952 fix.go:112] recreateIfNeeded on ha-740000: state=Stopped err=<nil>
	W1010 11:29:19.877561   11952 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:29:19.884791   11952 out.go:177] * Restarting existing qemu2 VM for "ha-740000" ...
	I1010 11:29:19.888774   11952 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:29:19.888814   11952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:95:63:61:db:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2
	I1010 11:29:19.891219   11952 main.go:141] libmachine: STDOUT: 
	I1010 11:29:19.891241   11952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:29:19.891283   11952 fix.go:56] duration metric: took 13.869333ms for fixHost
	I1010 11:29:19.891289   11952 start.go:83] releasing machines lock for "ha-740000", held for 13.887875ms
	W1010 11:29:19.891296   11952 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:29:19.891334   11952 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:29:19.891339   11952 start.go:729] Will try again in 5 seconds ...
	I1010 11:29:24.893478   11952 start.go:360] acquireMachinesLock for ha-740000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:29:24.893898   11952 start.go:364] duration metric: took 337.375µs to acquireMachinesLock for "ha-740000"
	I1010 11:29:24.894009   11952 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:29:24.894032   11952 fix.go:54] fixHost starting: 
	I1010 11:29:24.894745   11952 fix.go:112] recreateIfNeeded on ha-740000: state=Stopped err=<nil>
	W1010 11:29:24.894770   11952 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:29:24.899989   11952 out.go:177] * Restarting existing qemu2 VM for "ha-740000" ...
	I1010 11:29:24.907799   11952 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:29:24.908006   11952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:95:63:61:db:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2
	I1010 11:29:24.918075   11952 main.go:141] libmachine: STDOUT: 
	I1010 11:29:24.918176   11952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:29:24.918268   11952 fix.go:56] duration metric: took 24.233834ms for fixHost
	I1010 11:29:24.918296   11952 start.go:83] releasing machines lock for "ha-740000", held for 24.370167ms
	W1010 11:29:24.918556   11952 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-740000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-740000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:29:24.926947   11952 out.go:201] 
	W1010 11:29:24.931002   11952 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:29:24.931040   11952 out.go:270] * 
	* 
	W1010 11:29:24.933699   11952 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:29:24.940694   11952 out.go:201] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-740000 -v=7 --alsologtostderr" : exit status 80
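Note: the failed command here is the "start --wait=true" invocation, even though the assertion message echoes the earlier "node list" args. Both restart attempts in the stderr above die at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so qemu never receives its network file descriptor (the exec line passes "-netdev socket,id=net0,fd=3"). A small Go probe (an illustration, not part of the suite) that reproduces that exact failure mode by dialing the socket directly:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// socket_vmnet_client and minikube both expect a daemon listening here.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With no daemon running this prints a "connection refused" error,
			// the same condition libmachine reports above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
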
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-740000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (35.702917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.06s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 node delete m03 -v=7 --alsologtostderr: exit status 83 (51.568167ms)

-- stdout --
	* The control-plane node ha-740000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-740000"

-- /stdout --
** stderr ** 
	I1010 11:29:25.094477   11964 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:29:25.094883   11964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:25.094887   11964 out.go:358] Setting ErrFile to fd 2...
	I1010 11:29:25.094890   11964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:25.095025   11964 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:29:25.095212   11964 mustload.go:65] Loading cluster: ha-740000
	I1010 11:29:25.095437   11964 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:29:25.102108   11964 out.go:177] * The control-plane node ha-740000 host is not running: state=Stopped
	I1010 11:29:25.110006   11964 out.go:177]   To start a cluster, run: "minikube start -p ha-740000"

** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-740000 node delete m03 -v=7 --alsologtostderr": exit status 83
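Note: exit status 83 is the advisory path, not a crash: the mustload lines in the stderr above show the command loading the cluster config, observing state=Stopped, and printing the "To start a cluster" hint instead of attempting the delete. A hedged sketch of that guard shape (illustrative only; the real logic lives in minikube's mustload package):

	package main

	import "fmt"

	// requireRunning is a toy stand-in for the check behind the output above:
	// cluster-mutating commands verify host state first and return advice
	// instead of operating on a stopped VM.
	func requireRunning(state, profile string) error {
		if state != "Running" {
			return fmt.Errorf("the control-plane node %s host is not running: state=%s", profile, state)
		}
		return nil
	}

	func main() {
		if err := requireRunning("Stopped", "ha-740000"); err != nil {
			fmt.Println(err) // the CLI pairs this advice with exit status 83
		}
	}
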
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr: exit status 7 (33.973292ms)

-- stdout --
	ha-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:29:25.147049   11966 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:29:25.147204   11966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:25.147207   11966 out.go:358] Setting ErrFile to fd 2...
	I1010 11:29:25.147209   11966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:25.147324   11966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:29:25.147445   11966 out.go:352] Setting JSON to false
	I1010 11:29:25.147457   11966 mustload.go:65] Loading cluster: ha-740000
	I1010 11:29:25.147512   11966 notify.go:220] Checking for updates...
	I1010 11:29:25.147667   11966 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:29:25.147678   11966 status.go:174] checking status of ha-740000 ...
	I1010 11:29:25.147904   11966 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:29:25.147908   11966 status.go:384] host is not running, skipping remaining checks
	I1010 11:29:25.147910   11966 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr" : exit status 7
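Note: every status probe in these post-mortems exits 7 with "Stopped" on stdout, which the harness tolerates ("may be ok"). For reference, a sketch of how such a non-zero exit can be read in Go (the real wrapper is in helpers_test.go; this is not its code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status", "--format={{.Host}}", "-p", "ha-740000")
		out, err := cmd.Output() // stdout still carries "Stopped" on failure
		fmt.Printf("stdout: %s", out)
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit code:", ee.ExitCode()) // 7 in the runs above
		}
	}
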
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (33.6125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.12s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-740000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-740000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-740000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-740000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (32.89075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

TestMultiControlPlane/serial/StopCluster (3.35s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-740000 stop -v=7 --alsologtostderr: (3.241450334s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr: exit status 7 (69.470917ms)

-- stdout --
	ha-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:29:28.576113   11993 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:29:28.576302   11993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:28.576306   11993 out.go:358] Setting ErrFile to fd 2...
	I1010 11:29:28.576308   11993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:28.576467   11993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:29:28.576619   11993 out.go:352] Setting JSON to false
	I1010 11:29:28.576633   11993 mustload.go:65] Loading cluster: ha-740000
	I1010 11:29:28.576671   11993 notify.go:220] Checking for updates...
	I1010 11:29:28.576883   11993 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:29:28.576893   11993 status.go:174] checking status of ha-740000 ...
	I1010 11:29:28.577201   11993 status.go:371] ha-740000 host status = "Stopped" (err=<nil>)
	I1010 11:29:28.577205   11993 status.go:384] host is not running, skipping remaining checks
	I1010 11:29:28.577208   11993 status.go:176] ha-740000 status: &{Name:ha-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr": ha-740000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr": ha-740000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-740000 status -v=7 --alsologtostderr": ha-740000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
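Note: after the earlier node delete the test expects a three-node cluster (two control planes plus a worker), so it scans the status text for two "type: Control Plane" entries, three stopped kubelets, and two stopped apiservers; with a single node reported, every count comes up short. A sketch of that counting, fed the one-node output above (illustrative, not the test's code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// The single-node status text reported above.
		status := "ha-740000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))    // want 2, got 1
		fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))     // want 3, got 1
		fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped")) // want 2, got 1
	}
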

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (34.62425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.35s)

TestMultiControlPlane/serial/RestartCluster (5.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-740000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-740000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.191765917s)

-- stdout --
	* [ha-740000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-740000" primary control-plane node in "ha-740000" cluster
	* Restarting existing qemu2 VM for "ha-740000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-740000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:29:28.644156   11997 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:29:28.644308   11997 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:28.644311   11997 out.go:358] Setting ErrFile to fd 2...
	I1010 11:29:28.644314   11997 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:28.644451   11997 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:29:28.645554   11997 out.go:352] Setting JSON to false
	I1010 11:29:28.662986   11997 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7139,"bootTime":1728577829,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:29:28.663053   11997 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:29:28.668162   11997 out.go:177] * [ha-740000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:29:28.675052   11997 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:29:28.675092   11997 notify.go:220] Checking for updates...
	I1010 11:29:28.682066   11997 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:29:28.685091   11997 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:29:28.688032   11997 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:29:28.691074   11997 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:29:28.693987   11997 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:29:28.697316   11997 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:29:28.697582   11997 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:29:28.702060   11997 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:29:28.709057   11997 start.go:297] selected driver: qemu2
	I1010 11:29:28.709063   11997 start.go:901] validating driver "qemu2" against &{Name:ha-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:29:28.709111   11997 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:29:28.711589   11997 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:29:28.711624   11997 cni.go:84] Creating CNI manager for ""
	I1010 11:29:28.711647   11997 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1010 11:29:28.711695   11997 start.go:340] cluster config:
	{Name:ha-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-740000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:29:28.716226   11997 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:29:28.725003   11997 out.go:177] * Starting "ha-740000" primary control-plane node in "ha-740000" cluster
	I1010 11:29:28.729070   11997 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:29:28.729082   11997 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:29:28.729087   11997 cache.go:56] Caching tarball of preloaded images
	I1010 11:29:28.729139   11997 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:29:28.729144   11997 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:29:28.729197   11997 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/ha-740000/config.json ...
	I1010 11:29:28.729591   11997 start.go:360] acquireMachinesLock for ha-740000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:29:28.729620   11997 start.go:364] duration metric: took 23.542µs to acquireMachinesLock for "ha-740000"
	I1010 11:29:28.729630   11997 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:29:28.729635   11997 fix.go:54] fixHost starting: 
	I1010 11:29:28.729752   11997 fix.go:112] recreateIfNeeded on ha-740000: state=Stopped err=<nil>
	W1010 11:29:28.729759   11997 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:29:28.734056   11997 out.go:177] * Restarting existing qemu2 VM for "ha-740000" ...
	I1010 11:29:28.741009   11997 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:29:28.741040   11997 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:95:63:61:db:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2
	I1010 11:29:28.743189   11997 main.go:141] libmachine: STDOUT: 
	I1010 11:29:28.743206   11997 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:29:28.743235   11997 fix.go:56] duration metric: took 13.597917ms for fixHost
	I1010 11:29:28.743240   11997 start.go:83] releasing machines lock for "ha-740000", held for 13.616084ms
	W1010 11:29:28.743246   11997 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:29:28.743289   11997 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:29:28.743294   11997 start.go:729] Will try again in 5 seconds ...
	I1010 11:29:33.745462   11997 start.go:360] acquireMachinesLock for ha-740000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:29:33.745894   11997 start.go:364] duration metric: took 327.541µs to acquireMachinesLock for "ha-740000"
	I1010 11:29:33.745993   11997 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:29:33.746020   11997 fix.go:54] fixHost starting: 
	I1010 11:29:33.746678   11997 fix.go:112] recreateIfNeeded on ha-740000: state=Stopped err=<nil>
	W1010 11:29:33.746707   11997 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:29:33.752333   11997 out.go:177] * Restarting existing qemu2 VM for "ha-740000" ...
	I1010 11:29:33.760245   11997 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:29:33.760524   11997 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:95:63:61:db:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/ha-740000/disk.qcow2
	I1010 11:29:33.771014   11997 main.go:141] libmachine: STDOUT: 
	I1010 11:29:33.771073   11997 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:29:33.771158   11997 fix.go:56] duration metric: took 25.138916ms for fixHost
	I1010 11:29:33.771181   11997 start.go:83] releasing machines lock for "ha-740000", held for 25.266291ms
	W1010 11:29:33.771358   11997 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-740000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-740000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:29:33.778218   11997 out.go:201] 
	W1010 11:29:33.781421   11997 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:29:33.781494   11997 out.go:270] * 
	* 
	W1010 11:29:33.784019   11997 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:29:33.791232   11997 out.go:201] 

** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-740000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (73.027084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-740000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-740000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-740000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-740000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (33.579125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)
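For reference, the assertion at ha_test.go:415 reduces to decoding the `minikube profile list --output json` blob shown above and checking the profile's Status field. A minimal sketch of that style of check, assuming a hypothetical struct that mirrors only the JSON keys visible in the dump (this is not minikube's actual type):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList covers just the fields the assertion needs; the full schema
// is visible in the JSON dump above. Hypothetical subset, not minikube's
// real config types.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// ha_test.go:415 expected Status "Degraded" here, but the profile
		// reports "Starting", since the cluster never came up on this host.
		fmt.Printf("%s: status=%q nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}
```

On this host the decoded Status is "Starting" rather than the expected "Degraded" because the VM behind the profile never started.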

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-740000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-740000 --control-plane -v=7 --alsologtostderr: exit status 83 (44.932458ms)

-- stdout --
	* The control-plane node ha-740000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-740000"

-- /stdout --
** stderr ** 
	I1010 11:29:33.997160   12012 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:29:33.997341   12012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:33.997344   12012 out.go:358] Setting ErrFile to fd 2...
	I1010 11:29:33.997346   12012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:29:33.997465   12012 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:29:33.997683   12012 mustload.go:65] Loading cluster: ha-740000
	I1010 11:29:33.997906   12012 config.go:182] Loaded profile config "ha-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:29:34.002316   12012 out.go:177] * The control-plane node ha-740000 host is not running: state=Stopped
	I1010 11:29:34.006238   12012 out.go:177]   To start a cluster, run: "minikube start -p ha-740000"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-740000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (33.854583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-740000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-740000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-740000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-740000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-740000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-740000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-740000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-740000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-740000 -n ha-740000: exit status 7 (33.415625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestImageBuild/serial/Setup (9.83s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-065000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-065000 --driver=qemu2 : exit status 80 (9.758475875s)

-- stdout --
	* [image-065000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-065000" primary control-plane node in "image-065000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-065000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-065000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-065000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-065000 -n image-065000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-065000 -n image-065000: exit status 7 (72.390167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-065000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.83s)
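Every start failure in this report traces back to the same line: `ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`. The qemu2 driver launches the VM through `/opt/socket_vmnet/bin/socket_vmnet_client`, which requires a socket_vmnet daemon listening on `/var/run/socket_vmnet`. A minimal standalone probe for that precondition (a sketch, not part of the test suite):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// socket_vmnet_client hands qemu a connection to this unix socket; on
	// this CI host nothing is listening, so the dial fails with a
	// "connection refused" error matching the ERROR lines above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```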

TestJSONOutput/start/Command (9.74s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-617000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-617000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.734198834s)

-- stdout --
	{"specversion":"1.0","id":"24c364fb-ae40-488d-8b63-82545359b97b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-617000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"087aafc3-3187-446f-ad86-a949ca1a9b6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19787"}}
	{"specversion":"1.0","id":"7d0bc487-e8df-496d-bfe8-45a8fbeb2da1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig"}}
	{"specversion":"1.0","id":"8b9169b5-e1fb-425e-9825-f5342f59dfe2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"80ca97f1-fa9e-4318-876c-0c463f177cc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a7f59318-5d4e-431f-8e72-f19c403cefdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube"}}
	{"specversion":"1.0","id":"9e12a84c-6185-44af-ae64-fd5226c72efe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ff9c9fc9-e17c-4b4d-a6e4-278b8daddae6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6424ffe1-6200-4ec9-914c-7b2bb8357597","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"36c01d00-dfff-4e34-9892-a5cd8835de1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-617000\" primary control-plane node in \"json-output-617000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"23d69fed-19ad-46bd-a87b-fd9dd4200945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"881b0d22-b181-4503-af16-6eb493c64587","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-617000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"086c0936-b5e6-49eb-9bd7-012a64993bd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a2bccf3a-4718-4c3b-82ba-ac2ce6eed216","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"4a3eab50-b5a6-43f8-b439-690b2e0b6921","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-617000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"0aa5ca7f-0960-421b-b9eb-d26ae5dce57e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"85ef0ffa-03a9-4614-95cb-1a8b9fecc03e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-617000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.74s)
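The secondary errors at json_output_test.go:213 and :70 follow from the primary failure: the test decodes each stdout line as a JSON CloudEvent, and the bare `OUTPUT:` and `ERROR:` lines injected by the failed VM creation are not JSON. A sketch of the failure mode (hypothetical code, not the test's own implementation):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Abbreviated stand-in for the command output above: one real CloudEvent
	// line followed by the non-JSON lines the failed VM start injected.
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"9"}}
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Prints: converting to cloud events: invalid character 'O'
			// looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("event type:", ev["type"])
	}
}
```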

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-617000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-617000 --output=json --user=testUser: exit status 83 (83.360083ms)

-- stdout --
	{"specversion":"1.0","id":"861f6535-dfc6-4c43-9fe3-2d476836de14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-617000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"95753d46-c023-462e-ab18-7cb05f3dc309","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-617000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-617000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.06s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-617000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-617000 --output=json --user=testUser: exit status 83 (55.391709ms)

-- stdout --
	* The control-plane node json-output-617000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-617000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-617000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-617000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.06s)

TestMinikubeProfile (10.21s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-708000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-708000 --driver=qemu2 : exit status 80 (9.897128375s)

-- stdout --
	* [first-708000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-708000" primary control-plane node in "first-708000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-708000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-708000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-708000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-10 11:30:07.819413 -0700 PDT m=+466.618059876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-710000 -n second-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-710000 -n second-710000: exit status 85 (84.110458ms)

-- stdout --
	* Profile "second-710000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-710000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-710000" host is not running, skipping log retrieval (state="* Profile \"second-710000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-710000\"")
helpers_test.go:175: Cleaning up "second-710000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-710000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-10 11:30:08.019583 -0700 PDT m=+466.818231710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-708000 -n first-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-708000 -n first-708000: exit status 7 (33.2405ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-708000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-708000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-708000
--- FAIL: TestMinikubeProfile (10.21s)
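The post-mortem helper shown above tolerates non-zero `minikube status` exits (hence "status error: exit status 7 (may be ok)"), since stdout still carries a usable host state: "Stopped" for the exit-7 case and a profile-not-found message for the exit-85 case. A sketch of that tolerant pattern (hypothetical helper; the profile name is taken from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState mirrors the post-mortem invocation at helpers_test.go:239.
// exec's Output returns the captured stdout even on a non-zero exit, so
// the state string survives alongside the error.
func hostState(profile string) (string, error) {
	out, err := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := hostState("first-708000") // profile name from the log above
	if err != nil {
		fmt.Printf("status error: %v (may be ok)\n", err)
	}
	if state != "Running" {
		fmt.Printf("%q host is not running, skipping log retrieval (state=%q)\n",
			"first-708000", state)
	}
}
```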

TestMountStart/serial/StartWithMountFirst (9.93s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-635000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-635000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.8613645s)

-- stdout --
	* [mount-start-1-635000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-635000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-635000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-635000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-635000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-635000 -n mount-start-1-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-635000 -n mount-start-1-635000: exit status 7 (71.153917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-635000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.93s)

TestMultiNode/serial/FreshStart2Nodes (10.03s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-849000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-849000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.951709792s)

-- stdout --
	* [multinode-849000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-849000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:30:18.287812   12459 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:30:18.287958   12459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:30:18.287961   12459 out.go:358] Setting ErrFile to fd 2...
	I1010 11:30:18.287963   12459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:30:18.288086   12459 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:30:18.289218   12459 out.go:352] Setting JSON to false
	I1010 11:30:18.306797   12459 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7189,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:30:18.306869   12459 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:30:18.311940   12459 out.go:177] * [multinode-849000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:30:18.318951   12459 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:30:18.319016   12459 notify.go:220] Checking for updates...
	I1010 11:30:18.325922   12459 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:30:18.328920   12459 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:30:18.331890   12459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:30:18.334937   12459 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:30:18.337930   12459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:30:18.341039   12459 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:30:18.344882   12459 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:30:18.350917   12459 start.go:297] selected driver: qemu2
	I1010 11:30:18.350922   12459 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:30:18.350927   12459 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:30:18.353311   12459 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:30:18.356861   12459 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:30:18.359982   12459 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:30:18.359999   12459 cni.go:84] Creating CNI manager for ""
	I1010 11:30:18.360016   12459 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1010 11:30:18.360019   12459 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 11:30:18.360049   12459 start.go:340] cluster config:
	{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:30:18.364600   12459 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:30:18.372928   12459 out.go:177] * Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	I1010 11:30:18.376906   12459 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:30:18.376927   12459 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:30:18.376933   12459 cache.go:56] Caching tarball of preloaded images
	I1010 11:30:18.377012   12459 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:30:18.377018   12459 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:30:18.377244   12459 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/multinode-849000/config.json ...
	I1010 11:30:18.377255   12459 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/multinode-849000/config.json: {Name:mk54b9040b7b836b5059545a14d90eb2aade3150 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:30:18.377514   12459 start.go:360] acquireMachinesLock for multinode-849000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:30:18.377562   12459 start.go:364] duration metric: took 42.792µs to acquireMachinesLock for "multinode-849000"
	I1010 11:30:18.377575   12459 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:30:18.377613   12459 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:30:18.384910   12459 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:30:18.402147   12459 start.go:159] libmachine.API.Create for "multinode-849000" (driver="qemu2")
	I1010 11:30:18.402168   12459 client.go:168] LocalClient.Create starting
	I1010 11:30:18.402240   12459 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:30:18.402276   12459 main.go:141] libmachine: Decoding PEM data...
	I1010 11:30:18.402290   12459 main.go:141] libmachine: Parsing certificate...
	I1010 11:30:18.402325   12459 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:30:18.402354   12459 main.go:141] libmachine: Decoding PEM data...
	I1010 11:30:18.402363   12459 main.go:141] libmachine: Parsing certificate...
	I1010 11:30:18.402688   12459 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:30:18.553897   12459 main.go:141] libmachine: Creating SSH key...
	I1010 11:30:18.648478   12459 main.go:141] libmachine: Creating Disk image...
	I1010 11:30:18.648484   12459 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:30:18.648663   12459 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2
	I1010 11:30:18.658172   12459 main.go:141] libmachine: STDOUT: 
	I1010 11:30:18.658201   12459 main.go:141] libmachine: STDERR: 
	I1010 11:30:18.658264   12459 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2 +20000M
	I1010 11:30:18.666549   12459 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:30:18.666564   12459 main.go:141] libmachine: STDERR: 
	I1010 11:30:18.666583   12459 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2
	I1010 11:30:18.666588   12459 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:30:18.666601   12459 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:30:18.666637   12459 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:c3:bb:b3:a9:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2
	I1010 11:30:18.668337   12459 main.go:141] libmachine: STDOUT: 
	I1010 11:30:18.668351   12459 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:30:18.668369   12459 client.go:171] duration metric: took 266.198167ms to LocalClient.Create
	I1010 11:30:20.670534   12459 start.go:128] duration metric: took 2.292918125s to createHost
	I1010 11:30:20.670592   12459 start.go:83] releasing machines lock for "multinode-849000", held for 2.293043125s
	W1010 11:30:20.670639   12459 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:30:20.677914   12459 out.go:177] * Deleting "multinode-849000" in qemu2 ...
	W1010 11:30:20.704004   12459 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:30:20.704030   12459 start.go:729] Will try again in 5 seconds ...
	I1010 11:30:25.706299   12459 start.go:360] acquireMachinesLock for multinode-849000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:30:25.706769   12459 start.go:364] duration metric: took 373.208µs to acquireMachinesLock for "multinode-849000"
	I1010 11:30:25.706866   12459 start.go:93] Provisioning new machine with config: &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:30:25.707162   12459 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:30:25.716898   12459 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:30:25.766069   12459 start.go:159] libmachine.API.Create for "multinode-849000" (driver="qemu2")
	I1010 11:30:25.766123   12459 client.go:168] LocalClient.Create starting
	I1010 11:30:25.766250   12459 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:30:25.766343   12459 main.go:141] libmachine: Decoding PEM data...
	I1010 11:30:25.766359   12459 main.go:141] libmachine: Parsing certificate...
	I1010 11:30:25.766425   12459 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:30:25.766489   12459 main.go:141] libmachine: Decoding PEM data...
	I1010 11:30:25.766505   12459 main.go:141] libmachine: Parsing certificate...
	I1010 11:30:25.767158   12459 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:30:25.927256   12459 main.go:141] libmachine: Creating SSH key...
	I1010 11:30:26.143289   12459 main.go:141] libmachine: Creating Disk image...
	I1010 11:30:26.143299   12459 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:30:26.143542   12459 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2
	I1010 11:30:26.153610   12459 main.go:141] libmachine: STDOUT: 
	I1010 11:30:26.153628   12459 main.go:141] libmachine: STDERR: 
	I1010 11:30:26.153684   12459 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2 +20000M
	I1010 11:30:26.162052   12459 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:30:26.162067   12459 main.go:141] libmachine: STDERR: 
	I1010 11:30:26.162079   12459 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2
	I1010 11:30:26.162083   12459 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:30:26.162093   12459 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:30:26.162129   12459 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:26:e3:64:f5:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2
	I1010 11:30:26.163819   12459 main.go:141] libmachine: STDOUT: 
	I1010 11:30:26.163833   12459 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:30:26.163845   12459 client.go:171] duration metric: took 397.719625ms to LocalClient.Create
	I1010 11:30:28.166057   12459 start.go:128] duration metric: took 2.458894709s to createHost
	I1010 11:30:28.166118   12459 start.go:83] releasing machines lock for "multinode-849000", held for 2.459349709s
	W1010 11:30:28.166504   12459 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-849000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-849000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:30:28.175218   12459 out.go:201] 
	W1010 11:30:28.179548   12459 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:30:28.179575   12459 out.go:270] * 
	* 
	W1010 11:30:28.181514   12459 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:30:28.192301   12459 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-849000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
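Note that the disk image itself was created successfully before the launch failed. A minimal Go sketch of the two qemu-img steps visible in the stderr above (paths shortened and error handling simplified; this is not libmachine's actual code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Step 1: convert the raw seed image to qcow2.
	// Step 2: grow the qcow2 image by 20000 MB, as in the log.
	steps := [][]string{
		{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2"},
		{"qemu-img", "resize", "disk.qcow2", "+20000M"},
	}
	for _, s := range steps {
		out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
		fmt.Printf("%v -> %s (err=%v)\n", s, out, err)
	}
}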
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (74.75175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.03s)
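Every failure below appears to stem from the same root cause seen in the stderr above: QEMU is launched through socket_vmnet, and the client cannot reach /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the cluster never exists. A minimal Go sketch of the kind of pre-flight probe that would surface this before invoking QEMU (not minikube's actual code; the socket path is taken from the SocketVMnetPath value in the config dump above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the socket_vmnet control socket the way a pre-flight check could.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this host the dial fails with "connection refused", matching the log.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet reachable")
}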

TestMultiNode/serial/DeployApp2Nodes (91.36s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (62.839666ms)

** stderr ** 
	error: cluster "multinode-849000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- rollout status deployment/busybox: exit status 1 (60.301667ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (60.265792ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:30:28.468536   11135 retry.go:31] will retry after 1.399151985s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.770833ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:30:29.978846   11135 retry.go:31] will retry after 2.216082509s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.656958ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:30:32.302972   11135 retry.go:31] will retry after 1.310236136s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.158875ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:30:33.720697   11135 retry.go:31] will retry after 4.847279564s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.249708ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:30:38.676560   11135 retry.go:31] will retry after 4.491990542s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.320666ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:30:43.277181   11135 retry.go:31] will retry after 7.796620758s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.595208ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:30:51.186699   11135 retry.go:31] will retry after 9.724527927s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.65325ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:31:01.022236   11135 retry.go:31] will retry after 23.430775185s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.445792ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1010 11:31:24.557883   11135 retry.go:31] will retry after 34.710615855s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.704125ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.368292ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- exec  -- nslookup kubernetes.io: exit status 1 (59.811084ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- exec  -- nslookup kubernetes.default: exit status 1 (60.148125ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (59.515375ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (33.826791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (91.36s)
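The repeated "will retry after ..." lines above come from the test helper's backoff loop (retry.go). A rough Go sketch of that pattern, with jittered, roughly doubling delays (an assumption about the helper's behavior inferred from the timings, not its exact implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter retries fn with a jittered, growing delay between attempts,
// logging each wait like the "will retry after" lines above.
func retryAfter(attempts int, fn func() error) error {
	delay := time.Second
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return err
}

func main() {
	_ = retryAfter(3, func() error {
		return errors.New(`no server found for cluster "multinode-849000"`)
	})
}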

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-849000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.660334ms)

** stderr ** 
	error: no server found for cluster "multinode-849000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (33.891083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-849000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-849000 -v 3 --alsologtostderr: exit status 83 (49.242417ms)

-- stdout --
	* The control-plane node multinode-849000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-849000"

-- /stdout --
** stderr ** 
	I1010 11:31:59.774389   12540 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:31:59.774570   12540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:31:59.774573   12540 out.go:358] Setting ErrFile to fd 2...
	I1010 11:31:59.774576   12540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:31:59.774684   12540 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:31:59.774913   12540 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:31:59.775115   12540 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:31:59.780053   12540 out.go:177] * The control-plane node multinode-849000 host is not running: state=Stopped
	I1010 11:31:59.788029   12540 out.go:177]   To start a cluster, run: "minikube start -p multinode-849000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-849000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (33.463334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-849000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-849000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.497875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-849000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-849000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-849000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
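"unexpected end of JSON input" is encoding/json's error for empty input: kubectl printed nothing (the context does not exist), and the test then tried to decode the empty string. A two-line reproduction:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var v interface{}
	err := json.Unmarshal([]byte(""), &v)
	fmt.Println(err) // unexpected end of JSON input
}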
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (33.671375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.09s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-849000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-849000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-849000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMN
UMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-849000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVe
rsion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\"
:\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (32.889583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status --output json --alsologtostderr: exit status 7 (33.811125ms)

-- stdout --
	{"Name":"multinode-849000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1010 11:32:00.003153   12552 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:00.003326   12552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:00.003329   12552 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:00.003332   12552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:00.003481   12552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:00.003597   12552 out.go:352] Setting JSON to true
	I1010 11:32:00.003609   12552 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:32:00.003665   12552 notify.go:220] Checking for updates...
	I1010 11:32:00.003808   12552 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:00.003818   12552 status.go:174] checking status of multinode-849000 ...
	I1010 11:32:00.004054   12552 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:32:00.004058   12552 status.go:384] host is not running, skipping remaining checks
	I1010 11:32:00.004060   12552 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-849000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
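The decode error above is a shape mismatch, not corrupt output: the test unmarshals into []cluster.Status, but with a single node `status --output json` emits one object (see the stdout block). A tolerant decoder sketch (Status trimmed to the fields shown in the log; not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts either a JSON array of statuses or, as in the
// single-node case above, a single status object.
func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	out := []byte(`{"Name":"multinode-849000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	ss, err := decodeStatuses(out)
	fmt.Println(ss, err)
}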
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (33.2535ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 node stop m03: exit status 85 (50.919042ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-849000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status: exit status 7 (34.114584ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status --alsologtostderr: exit status 7 (33.08625ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:32:00.155517   12560 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:00.155685   12560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:00.155688   12560 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:00.155691   12560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:00.155809   12560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:00.155923   12560 out.go:352] Setting JSON to false
	I1010 11:32:00.155934   12560 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:32:00.155988   12560 notify.go:220] Checking for updates...
	I1010 11:32:00.156143   12560 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:00.156152   12560 status.go:174] checking status of multinode-849000 ...
	I1010 11:32:00.156380   12560 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:32:00.156384   12560 status.go:384] host is not running, skipping remaining checks
	I1010 11:32:00.156386   12560 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-849000 status --alsologtostderr": multinode-849000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (33.567333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

TestMultiNode/serial/StartAfterStop (51.96s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 node start m03 -v=7 --alsologtostderr: exit status 85 (51.550834ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 11:32:00.222442   12564 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:00.222816   12564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:00.222819   12564 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:00.222822   12564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:00.222939   12564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:00.223157   12564 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:32:00.223357   12564 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:00.227905   12564 out.go:201] 
	W1010 11:32:00.232085   12564 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1010 11:32:00.232090   12564 out.go:270] * 
	* 
	W1010 11:32:00.233902   12564 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:32:00.238075   12564 out.go:201] 

** /stderr **
multinode_test.go:284: I1010 11:32:00.222442   12564 out.go:345] Setting OutFile to fd 1 ...
I1010 11:32:00.222816   12564 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:32:00.222819   12564 out.go:358] Setting ErrFile to fd 2...
I1010 11:32:00.222822   12564 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 11:32:00.222939   12564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
I1010 11:32:00.223157   12564 mustload.go:65] Loading cluster: multinode-849000
I1010 11:32:00.223357   12564 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1010 11:32:00.227905   12564 out.go:201] 
W1010 11:32:00.232085   12564 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1010 11:32:00.232090   12564 out.go:270] * 
* 
W1010 11:32:00.233902   12564 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1010 11:32:00.238075   12564 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-849000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr: exit status 7 (33.928333ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:32:00.274426   12566 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:00.274851   12566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:00.274856   12566 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:00.274859   12566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:00.275026   12566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:00.275190   12566 out.go:352] Setting JSON to false
	I1010 11:32:00.275202   12566 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:32:00.275413   12566 notify.go:220] Checking for updates...
	I1010 11:32:00.275693   12566 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:00.275703   12566 status.go:174] checking status of multinode-849000 ...
	I1010 11:32:00.275925   12566 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:32:00.275930   12566 status.go:384] host is not running, skipping remaining checks
	I1010 11:32:00.275932   12566 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1010 11:32:00.276858   11135 retry.go:31] will retry after 619.064901ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr: exit status 7 (77.56975ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:32:00.973738   12568 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:00.973935   12568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:00.973939   12568 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:00.973942   12568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:00.974113   12568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:00.974290   12568 out.go:352] Setting JSON to false
	I1010 11:32:00.974304   12568 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:32:00.974352   12568 notify.go:220] Checking for updates...
	I1010 11:32:00.974551   12568 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:00.974563   12568 status.go:174] checking status of multinode-849000 ...
	I1010 11:32:00.974868   12568 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:32:00.974872   12568 status.go:384] host is not running, skipping remaining checks
	I1010 11:32:00.974875   12568 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1010 11:32:00.975878   11135 retry.go:31] will retry after 953.056411ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr: exit status 7 (74.937542ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:32:02.004074   12572 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:02.004284   12572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:02.004288   12572 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:02.004292   12572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:02.004476   12572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:02.004632   12572 out.go:352] Setting JSON to false
	I1010 11:32:02.004647   12572 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:32:02.004680   12572 notify.go:220] Checking for updates...
	I1010 11:32:02.004908   12572 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:02.004919   12572 status.go:174] checking status of multinode-849000 ...
	I1010 11:32:02.005230   12572 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:32:02.005234   12572 status.go:384] host is not running, skipping remaining checks
	I1010 11:32:02.005237   12572 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1010 11:32:02.006333   11135 retry.go:31] will retry after 2.081604782s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr: exit status 7 (76.572333ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:32:04.164731   12574 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:04.164928   12574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:04.164932   12574 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:04.164935   12574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:04.165094   12574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:04.165249   12574 out.go:352] Setting JSON to false
	I1010 11:32:04.165264   12574 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:32:04.165300   12574 notify.go:220] Checking for updates...
	I1010 11:32:04.165514   12574 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:04.165525   12574 status.go:174] checking status of multinode-849000 ...
	I1010 11:32:04.165842   12574 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:32:04.165847   12574 status.go:384] host is not running, skipping remaining checks
	I1010 11:32:04.165850   12574 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1010 11:32:04.166917   11135 retry.go:31] will retry after 1.841725336s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr: exit status 7 (79.057667ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:32:06.087903   12576 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:06.088102   12576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:06.088107   12576 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:06.088109   12576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:06.088280   12576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:06.088430   12576 out.go:352] Setting JSON to false
	I1010 11:32:06.088445   12576 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:32:06.088484   12576 notify.go:220] Checking for updates...
	I1010 11:32:06.088718   12576 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:06.088729   12576 status.go:174] checking status of multinode-849000 ...
	I1010 11:32:06.089036   12576 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:32:06.089041   12576 status.go:384] host is not running, skipping remaining checks
	I1010 11:32:06.089044   12576 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1010 11:32:06.090087   11135 retry.go:31] will retry after 4.439922075s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr: exit status 7 (76.283292ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:32:10.606430   12578 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:10.606621   12578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:10.606625   12578 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:10.606628   12578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:10.606803   12578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:10.606950   12578 out.go:352] Setting JSON to false
	I1010 11:32:10.606965   12578 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:32:10.607007   12578 notify.go:220] Checking for updates...
	I1010 11:32:10.607221   12578 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:10.607234   12578 status.go:174] checking status of multinode-849000 ...
	I1010 11:32:10.607546   12578 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:32:10.607551   12578 status.go:384] host is not running, skipping remaining checks
	I1010 11:32:10.607553   12578 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1010 11:32:10.608596   11135 retry.go:31] will retry after 9.132406034s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr: exit status 7 (75.097708ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:32:19.816292   12583 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:19.816487   12583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:19.816491   12583 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:19.816495   12583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:19.816670   12583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:19.816812   12583 out.go:352] Setting JSON to false
	I1010 11:32:19.816827   12583 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:32:19.816867   12583 notify.go:220] Checking for updates...
	I1010 11:32:19.817103   12583 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:19.817113   12583 status.go:174] checking status of multinode-849000 ...
	I1010 11:32:19.817446   12583 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:32:19.817451   12583 status.go:384] host is not running, skipping remaining checks
	I1010 11:32:19.817453   12583 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1010 11:32:19.818500   11135 retry.go:31] will retry after 11.213573944s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr: exit status 7 (78.861833ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:32:31.110986   12585 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:31.111192   12585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:31.111196   12585 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:31.111200   12585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:31.111370   12585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:31.111508   12585 out.go:352] Setting JSON to false
	I1010 11:32:31.111524   12585 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:32:31.111563   12585 notify.go:220] Checking for updates...
	I1010 11:32:31.111783   12585 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:31.111793   12585 status.go:174] checking status of multinode-849000 ...
	I1010 11:32:31.112113   12585 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:32:31.112118   12585 status.go:384] host is not running, skipping remaining checks
	I1010 11:32:31.112120   12585 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1010 11:32:31.113224   11135 retry.go:31] will retry after 20.926547906s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr: exit status 7 (76.501125ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:32:52.115991   12590 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:52.116177   12590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:52.116181   12590 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:52.116185   12590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:52.116346   12590 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:52.116511   12590 out.go:352] Setting JSON to false
	I1010 11:32:52.116526   12590 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:32:52.116563   12590 notify.go:220] Checking for updates...
	I1010 11:32:52.116800   12590 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:52.116812   12590 status.go:174] checking status of multinode-849000 ...
	I1010 11:32:52.117128   12590 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:32:52.117132   12590 status.go:384] host is not running, skipping remaining checks
	I1010 11:32:52.117135   12590 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-849000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (36.258458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.96s)
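
The retry cadence in the failure above comes from minikube's retry helper (the "retry.go:31] will retry after ..." lines): the waits run roughly 2.1s, 1.8s, 4.4s, 9.1s, 11.2s, 20.9s, i.e. a doubling backoff with random jitter, until the test gives up after ~52s. As a rough illustration only, the Go sketch below reproduces that shape; the function name, constants, and give-up condition are assumptions for illustration, not minikube's actual retry implementation.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn until it succeeds or the time budget is
    // spent, roughly doubling a base delay each attempt and adding jitter,
    // consistent with the increasing-but-noisy waits in the log above.
    func retryWithBackoff(fn func() error, budget time.Duration) error {
    	start := time.Now()
    	delay := 2 * time.Second // first observed wait above is ~2s
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > budget {
    			return fmt.Errorf("gave up after %s: %w", time.Since(start).Round(time.Millisecond), err)
    		}
    		// Jitter: wait somewhere between 0.5x and 1.5x the base delay,
    		// which is why a later wait can occasionally be shorter.
    		wait := delay/2 + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %s: %v\n", wait, err)
    		time.Sleep(wait)
    		delay *= 2
    	}
    }

    func main() {
    	// Simulate a status probe that keeps reporting the host as stopped.
    	err := retryWithBackoff(func() error { return errors.New("exit status 7") }, 50*time.Second)
    	fmt.Println(err)
    }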

TestMultiNode/serial/RestartKeepsNodes (8.71s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-849000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-849000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-849000: (3.308941625s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-849000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-849000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.254842042s)

-- stdout --
	* [multinode-849000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	* Restarting existing qemu2 VM for "multinode-849000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-849000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:32:55.566485   12614 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:32:55.566679   12614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:55.566684   12614 out.go:358] Setting ErrFile to fd 2...
	I1010 11:32:55.566687   12614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:32:55.566868   12614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:32:55.568144   12614 out.go:352] Setting JSON to false
	I1010 11:32:55.588517   12614 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7346,"bootTime":1728577829,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:32:55.588583   12614 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:32:55.592239   12614 out.go:177] * [multinode-849000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:32:55.599155   12614 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:32:55.599192   12614 notify.go:220] Checking for updates...
	I1010 11:32:55.606161   12614 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:32:55.614112   12614 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:32:55.622090   12614 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:32:55.630060   12614 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:32:55.637929   12614 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:32:55.642376   12614 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:32:55.642424   12614 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:32:55.647164   12614 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:32:55.654055   12614 start.go:297] selected driver: qemu2
	I1010 11:32:55.654060   12614 start.go:901] validating driver "qemu2" against &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:32:55.654104   12614 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:32:55.656775   12614 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:32:55.656803   12614 cni.go:84] Creating CNI manager for ""
	I1010 11:32:55.656840   12614 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1010 11:32:55.656897   12614 start.go:340] cluster config:
	{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:32:55.661575   12614 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:32:55.665020   12614 out.go:177] * Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	I1010 11:32:55.673071   12614 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:32:55.673093   12614 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:32:55.673101   12614 cache.go:56] Caching tarball of preloaded images
	I1010 11:32:55.673182   12614 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:32:55.673188   12614 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:32:55.673254   12614 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/multinode-849000/config.json ...
	I1010 11:32:55.673585   12614 start.go:360] acquireMachinesLock for multinode-849000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:32:55.673639   12614 start.go:364] duration metric: took 47.208µs to acquireMachinesLock for "multinode-849000"
	I1010 11:32:55.673650   12614 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:32:55.673655   12614 fix.go:54] fixHost starting: 
	I1010 11:32:55.673786   12614 fix.go:112] recreateIfNeeded on multinode-849000: state=Stopped err=<nil>
	W1010 11:32:55.673797   12614 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:32:55.678078   12614 out.go:177] * Restarting existing qemu2 VM for "multinode-849000" ...
	I1010 11:32:55.686000   12614 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:32:55.686053   12614 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:26:e3:64:f5:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2
	I1010 11:32:55.688484   12614 main.go:141] libmachine: STDOUT: 
	I1010 11:32:55.688508   12614 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:32:55.688538   12614 fix.go:56] duration metric: took 14.881167ms for fixHost
	I1010 11:32:55.688544   12614 start.go:83] releasing machines lock for "multinode-849000", held for 14.900375ms
	W1010 11:32:55.688551   12614 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:32:55.688601   12614 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:32:55.688607   12614 start.go:729] Will try again in 5 seconds ...
	I1010 11:33:00.690797   12614 start.go:360] acquireMachinesLock for multinode-849000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:33:00.691158   12614 start.go:364] duration metric: took 265.708µs to acquireMachinesLock for "multinode-849000"
	I1010 11:33:00.691270   12614 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:33:00.691288   12614 fix.go:54] fixHost starting: 
	I1010 11:33:00.691976   12614 fix.go:112] recreateIfNeeded on multinode-849000: state=Stopped err=<nil>
	W1010 11:33:00.692003   12614 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:33:00.696481   12614 out.go:177] * Restarting existing qemu2 VM for "multinode-849000" ...
	I1010 11:33:00.704383   12614 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:33:00.704687   12614 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:26:e3:64:f5:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2
	I1010 11:33:00.714270   12614 main.go:141] libmachine: STDOUT: 
	I1010 11:33:00.714322   12614 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:33:00.714378   12614 fix.go:56] duration metric: took 23.090834ms for fixHost
	I1010 11:33:00.714399   12614 start.go:83] releasing machines lock for "multinode-849000", held for 23.210875ms
	W1010 11:33:00.714630   12614 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-849000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-849000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:33:00.723368   12614 out.go:201] 
	W1010 11:33:00.727491   12614 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:33:00.727526   12614 out.go:270] * 
	* 
	W1010 11:33:00.730012   12614 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:33:00.738444   12614 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-849000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-849000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (35.905417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.71s)
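
Every restart attempt above fails before the guest even boots: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A quick way to check the daemon independently of minikube is to dial the unix socket directly, as in the sketch below; the socket path is taken from the log, while the program itself is just an illustrative probe (and may need the same privileges as socket_vmnet_client).

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Socket path from the cluster config logged above
    	// (SocketVMnetPath:/var/run/socket_vmnet).
    	const socketPath = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
    	if err != nil {
    		// "connection refused" here matches the driver error in the log:
    		// nothing is listening on the socket, i.e. the socket_vmnet
    		// daemon is not running (or is listening elsewhere).
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
    		os.Exit(1)
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is accepting connections at", socketPath)
    }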

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 node delete m03: exit status 83 (43.382291ms)

-- stdout --
	* The control-plane node multinode-849000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-849000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-849000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status --alsologtostderr: exit status 7 (33.6885ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:33:00.936874   12628 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:33:00.937042   12628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:33:00.937045   12628 out.go:358] Setting ErrFile to fd 2...
	I1010 11:33:00.937047   12628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:33:00.937172   12628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:33:00.937285   12628 out.go:352] Setting JSON to false
	I1010 11:33:00.937295   12628 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:33:00.937339   12628 notify.go:220] Checking for updates...
	I1010 11:33:00.937517   12628 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:33:00.937526   12628 status.go:174] checking status of multinode-849000 ...
	I1010 11:33:00.937770   12628 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:33:00.937773   12628 status.go:384] host is not running, skipping remaining checks
	I1010 11:33:00.937775   12628 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-849000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (33.362958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
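
The post-mortem's `status --format={{.Host}}` is a Go text/template rendered over the status value that status.go logs above (&{Name:multinode-849000 Host:Stopped ...}), which is why it prints just "Stopped". The sketch below mirrors only the fields visible in that log line; it is a reduced stand-in for illustration, not minikube's real Status type.

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status mirrors the fields printed by status.go in the log above.
    type Status struct {
    	Name       string
    	Host       string
    	Kubelet    string
    	APIServer  string
    	Kubeconfig string
    	Worker     bool
    }

    func main() {
    	st := Status{
    		Name: "multinode-849000", Host: "Stopped", Kubelet: "Stopped",
    		APIServer: "Stopped", Kubeconfig: "Stopped", Worker: false,
    	}
    	// Equivalent of: minikube status --format={{.Host}}
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
    	if err := tmpl.Execute(os.Stdout, st); err != nil {
    		panic(err)
    	}
    	// Prints "Stopped", matching the post-mortem output above.
    }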

TestMultiNode/serial/StopMultiNode (3.61s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-849000 stop: (3.471713667s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status: exit status 7 (71.326417ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-849000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-849000 status --alsologtostderr: exit status 7 (34.831208ms)

-- stdout --
	multinode-849000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1010 11:33:04.548694   12652 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:33:04.548858   12652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:33:04.548862   12652 out.go:358] Setting ErrFile to fd 2...
	I1010 11:33:04.548864   12652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:33:04.548987   12652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:33:04.549113   12652 out.go:352] Setting JSON to false
	I1010 11:33:04.549125   12652 mustload.go:65] Loading cluster: multinode-849000
	I1010 11:33:04.549166   12652 notify.go:220] Checking for updates...
	I1010 11:33:04.549318   12652 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:33:04.549326   12652 status.go:174] checking status of multinode-849000 ...
	I1010 11:33:04.549566   12652 status.go:371] multinode-849000 host status = "Stopped" (err=<nil>)
	I1010 11:33:04.549570   12652 status.go:384] host is not running, skipping remaining checks
	I1010 11:33:04.549572   12652 status.go:176] multinode-849000 status: &{Name:multinode-849000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-849000 status --alsologtostderr": multinode-849000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-849000 status --alsologtostderr": multinode-849000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (33.256792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.61s)
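
The two "incorrect number of stopped ..." failures above read as count assertions over the status text: after stopping a multi-node cluster the test apparently expects one "host: Stopped" and one "kubelet: Stopped" line per node, but only the primary control-plane node appears because the worker was never added. A minimal sketch of that kind of check follows; the expected count of 2 is an assumption about the test's intent, not a value taken from multinode_test.go.

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Status output as captured in the log above (primary node only).
    	statusOut := `multinode-849000
    type: Control Plane
    host: Stopped
    kubelet: Stopped
    apiserver: Stopped
    kubeconfig: Stopped
    `
    	hosts := strings.Count(statusOut, "host: Stopped")
    	kubelets := strings.Count(statusOut, "kubelet: Stopped")
    	const want = 2 // one per node in a two-node cluster (assumed)
    	if hosts != want || kubelets != want {
    		fmt.Printf("incorrect number of stopped hosts/kubelets: got %d/%d, want %d/%d\n",
    			hosts, kubelets, want, want)
    	}
    }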

TestMultiNode/serial/RestartMultiNode (5.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-849000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-849000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.199041584s)

-- stdout --
	* [multinode-849000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	* Restarting existing qemu2 VM for "multinode-849000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-849000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:33:04.615290   12656 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:33:04.615432   12656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:33:04.615435   12656 out.go:358] Setting ErrFile to fd 2...
	I1010 11:33:04.615438   12656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:33:04.615573   12656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:33:04.616643   12656 out.go:352] Setting JSON to false
	I1010 11:33:04.634027   12656 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7355,"bootTime":1728577829,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:33:04.634123   12656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:33:04.638783   12656 out.go:177] * [multinode-849000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:33:04.646727   12656 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:33:04.646765   12656 notify.go:220] Checking for updates...
	I1010 11:33:04.653740   12656 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:33:04.656721   12656 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:33:04.663795   12656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:33:04.670723   12656 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:33:04.678663   12656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:33:04.682037   12656 config.go:182] Loaded profile config "multinode-849000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:33:04.682317   12656 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:33:04.686596   12656 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:33:04.693730   12656 start.go:297] selected driver: qemu2
	I1010 11:33:04.693741   12656 start.go:901] validating driver "qemu2" against &{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:33:04.693826   12656 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:33:04.696556   12656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:33:04.696579   12656 cni.go:84] Creating CNI manager for ""
	I1010 11:33:04.696603   12656 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1010 11:33:04.696649   12656 start.go:340] cluster config:
	{Name:multinode-849000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-849000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:33:04.701407   12656 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:33:04.709759   12656 out.go:177] * Starting "multinode-849000" primary control-plane node in "multinode-849000" cluster
	I1010 11:33:04.712776   12656 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:33:04.712799   12656 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:33:04.712807   12656 cache.go:56] Caching tarball of preloaded images
	I1010 11:33:04.712860   12656 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:33:04.712865   12656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:33:04.712926   12656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/multinode-849000/config.json ...
	I1010 11:33:04.713426   12656 start.go:360] acquireMachinesLock for multinode-849000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:33:04.713459   12656 start.go:364] duration metric: took 26.666µs to acquireMachinesLock for "multinode-849000"
	I1010 11:33:04.713482   12656 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:33:04.713487   12656 fix.go:54] fixHost starting: 
	I1010 11:33:04.713625   12656 fix.go:112] recreateIfNeeded on multinode-849000: state=Stopped err=<nil>
	W1010 11:33:04.713638   12656 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:33:04.721793   12656 out.go:177] * Restarting existing qemu2 VM for "multinode-849000" ...
	I1010 11:33:04.724730   12656 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:33:04.724772   12656 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:26:e3:64:f5:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2
	I1010 11:33:04.726999   12656 main.go:141] libmachine: STDOUT: 
	I1010 11:33:04.727020   12656 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:33:04.727050   12656 fix.go:56] duration metric: took 13.562625ms for fixHost
	I1010 11:33:04.727055   12656 start.go:83] releasing machines lock for "multinode-849000", held for 13.591125ms
	W1010 11:33:04.727061   12656 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:33:04.727100   12656 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:33:04.727105   12656 start.go:729] Will try again in 5 seconds ...
	I1010 11:33:09.727618   12656 start.go:360] acquireMachinesLock for multinode-849000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:33:09.728043   12656 start.go:364] duration metric: took 304.75µs to acquireMachinesLock for "multinode-849000"
	I1010 11:33:09.728162   12656 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:33:09.728182   12656 fix.go:54] fixHost starting: 
	I1010 11:33:09.728924   12656 fix.go:112] recreateIfNeeded on multinode-849000: state=Stopped err=<nil>
	W1010 11:33:09.728949   12656 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:33:09.733452   12656 out.go:177] * Restarting existing qemu2 VM for "multinode-849000" ...
	I1010 11:33:09.737509   12656 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:33:09.737755   12656 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:26:e3:64:f5:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/multinode-849000/disk.qcow2
	I1010 11:33:09.747913   12656 main.go:141] libmachine: STDOUT: 
	I1010 11:33:09.747996   12656 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:33:09.748081   12656 fix.go:56] duration metric: took 19.8975ms for fixHost
	I1010 11:33:09.748098   12656 start.go:83] releasing machines lock for "multinode-849000", held for 20.016ms
	W1010 11:33:09.748401   12656 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-849000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-849000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:33:09.755432   12656 out.go:201] 
	W1010 11:33:09.759510   12656 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:33:09.759603   12656 out.go:270] * 
	* 
	W1010 11:33:09.762089   12656 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:33:09.769349   12656 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-849000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (73.5385ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)

TestMultiNode/serial/ValidateNameConflict (20.18s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-849000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-849000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-849000-m01 --driver=qemu2 : exit status 80 (9.880234584s)

-- stdout --
	* [multinode-849000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-849000-m01" primary control-plane node in "multinode-849000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-849000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-849000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-849000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-849000-m02 --driver=qemu2 : exit status 80 (10.050246417s)

-- stdout --
	* [multinode-849000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-849000-m02" primary control-plane node in "multinode-849000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-849000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-849000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-849000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-849000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-849000: exit status 83 (85.307041ms)

-- stdout --
	* The control-plane node multinode-849000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-849000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-849000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-849000 -n multinode-849000: exit status 7 (34.757708ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-849000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.18s)
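
The "(dbg) Non-zero exit: ... exit status N" lines above come from the harness running the minikube binary and reading the child process's exit code: 80 marks the GUEST_PROVISION failure and 83 the "host is not running" refusal from "node add". A minimal sketch of that pattern with os/exec (illustrative only, not the harness's actual helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "node", "add", "-p", "multinode-849000")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// The report shows exit status 83 for this invocation.
			fmt.Printf("Non-zero exit: %d\n%s", ee.ExitCode(), out)
		}
	}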

TestPreload (10.02s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-288000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-288000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.864140583s)

-- stdout --
	* [test-preload-288000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-288000" primary control-plane node in "test-preload-288000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-288000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:33:30.173931   12710 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:33:30.174077   12710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:33:30.174081   12710 out.go:358] Setting ErrFile to fd 2...
	I1010 11:33:30.174083   12710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:33:30.174218   12710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:33:30.175366   12710 out.go:352] Setting JSON to false
	I1010 11:33:30.192649   12710 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7381,"bootTime":1728577829,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:33:30.192717   12710 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:33:30.198721   12710 out.go:177] * [test-preload-288000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:33:30.206652   12710 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:33:30.206722   12710 notify.go:220] Checking for updates...
	I1010 11:33:30.213071   12710 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:33:30.215694   12710 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:33:30.218680   12710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:33:30.221717   12710 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:33:30.224665   12710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:33:30.228012   12710 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:33:30.228065   12710 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:33:30.232692   12710 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:33:30.239662   12710 start.go:297] selected driver: qemu2
	I1010 11:33:30.239668   12710 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:33:30.239674   12710 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:33:30.242161   12710 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:33:30.245674   12710 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:33:30.248743   12710 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:33:30.248755   12710 cni.go:84] Creating CNI manager for ""
	I1010 11:33:30.248783   12710 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:33:30.248792   12710 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:33:30.248818   12710 start.go:340] cluster config:
	{Name:test-preload-288000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:33:30.253438   12710 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:33:30.259587   12710 out.go:177] * Starting "test-preload-288000" primary control-plane node in "test-preload-288000" cluster
	I1010 11:33:30.263627   12710 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1010 11:33:30.263699   12710 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/test-preload-288000/config.json ...
	I1010 11:33:30.263715   12710 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/test-preload-288000/config.json: {Name:mk9a336f27ab263760a78dad47134cd78f6d7078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:33:30.263722   12710 cache.go:107] acquiring lock: {Name:mkc32bb098d361ed5167827b37185371b0aeb7cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:33:30.263724   12710 cache.go:107] acquiring lock: {Name:mk131639b0486ec4ef54015493cb91c6adc3a86f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:33:30.263728   12710 cache.go:107] acquiring lock: {Name:mk1ab37a881d3d5c6010f01de36db23597836a44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:33:30.263737   12710 cache.go:107] acquiring lock: {Name:mk7cfbf6683f072b7ea4865d7851e43f8d5522a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:33:30.263703   12710 cache.go:107] acquiring lock: {Name:mk89864d4a71c1101f1bcc3d5dc60cc98a46db0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:33:30.263705   12710 cache.go:107] acquiring lock: {Name:mk77be75b5e1b9a503ab69ae758ae617e898c8fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:33:30.263962   12710 cache.go:107] acquiring lock: {Name:mk1453cc8a200c31671913d34d8bdc336aa6c68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:33:30.263971   12710 cache.go:107] acquiring lock: {Name:mk86ef144209b4b22b7c5e0c9a22f8015fe2aa17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:33:30.264008   12710 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1010 11:33:30.264029   12710 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1010 11:33:30.264054   12710 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1010 11:33:30.264139   12710 start.go:360] acquireMachinesLock for test-preload-288000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:33:30.264344   12710 start.go:364] duration metric: took 199.542µs to acquireMachinesLock for "test-preload-288000"
	I1010 11:33:30.264359   12710 start.go:93] Provisioning new machine with config: &{Name:test-preload-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:33:30.264386   12710 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:33:30.264411   12710 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1010 11:33:30.264497   12710 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1010 11:33:30.264508   12710 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1010 11:33:30.264841   12710 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:33:30.264936   12710 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:33:30.268656   12710 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:33:30.276729   12710 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1010 11:33:30.278031   12710 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1010 11:33:30.278031   12710 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1010 11:33:30.278031   12710 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1010 11:33:30.278261   12710 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1010 11:33:30.278411   12710 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1010 11:33:30.279202   12710 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:33:30.279319   12710 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:33:30.286013   12710 start.go:159] libmachine.API.Create for "test-preload-288000" (driver="qemu2")
	I1010 11:33:30.286030   12710 client.go:168] LocalClient.Create starting
	I1010 11:33:30.286137   12710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:33:30.286178   12710 main.go:141] libmachine: Decoding PEM data...
	I1010 11:33:30.286195   12710 main.go:141] libmachine: Parsing certificate...
	I1010 11:33:30.286233   12710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:33:30.286264   12710 main.go:141] libmachine: Decoding PEM data...
	I1010 11:33:30.286273   12710 main.go:141] libmachine: Parsing certificate...
	I1010 11:33:30.286633   12710 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:33:30.441482   12710 main.go:141] libmachine: Creating SSH key...
	I1010 11:33:30.522761   12710 main.go:141] libmachine: Creating Disk image...
	I1010 11:33:30.522781   12710 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:33:30.522984   12710 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/disk.qcow2
	I1010 11:33:30.532679   12710 main.go:141] libmachine: STDOUT: 
	I1010 11:33:30.532703   12710 main.go:141] libmachine: STDERR: 
	I1010 11:33:30.532768   12710 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/disk.qcow2 +20000M
	I1010 11:33:30.541841   12710 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:33:30.541872   12710 main.go:141] libmachine: STDERR: 
	I1010 11:33:30.541886   12710 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/disk.qcow2
	I1010 11:33:30.541892   12710 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:33:30.541908   12710 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:33:30.541954   12710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:0d:d7:f9:5e:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/disk.qcow2
	I1010 11:33:30.544042   12710 main.go:141] libmachine: STDOUT: 
	I1010 11:33:30.544064   12710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:33:30.544088   12710 client.go:171] duration metric: took 258.05525ms to LocalClient.Create
	I1010 11:33:30.761762   12710 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1010 11:33:30.765804   12710 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1010 11:33:30.773643   12710 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1010 11:33:30.901910   12710 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1010 11:33:30.901929   12710 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 638.210959ms
	I1010 11:33:30.901937   12710 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I1010 11:33:30.902782   12710 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1010 11:33:30.939842   12710 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W1010 11:33:30.993292   12710 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1010 11:33:30.993319   12710 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1010 11:33:31.002931   12710 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W1010 11:33:31.598704   12710 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1010 11:33:31.598811   12710 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 11:33:32.042971   12710 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1010 11:33:32.043034   12710 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.779346667s
	I1010 11:33:32.043067   12710 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1010 11:33:32.312477   12710 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1010 11:33:32.312522   12710 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.048814917s
	I1010 11:33:32.312555   12710 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1010 11:33:32.434062   12710 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1010 11:33:32.434118   12710 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.170198834s
	I1010 11:33:32.434147   12710 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1010 11:33:32.544518   12710 start.go:128] duration metric: took 2.28013275s to createHost
	I1010 11:33:32.544572   12710 start.go:83] releasing machines lock for "test-preload-288000", held for 2.280240916s
	W1010 11:33:32.544627   12710 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:33:32.554548   12710 out.go:177] * Deleting "test-preload-288000" in qemu2 ...
	W1010 11:33:32.577706   12710 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:33:32.577740   12710 start.go:729] Will try again in 5 seconds ...
	I1010 11:33:34.028125   12710 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1010 11:33:34.028218   12710 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 3.76455175s
	I1010 11:33:34.028252   12710 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1010 11:33:35.087258   12710 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1010 11:33:35.087307   12710 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.823639875s
	I1010 11:33:35.087354   12710 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1010 11:33:35.687686   12710 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1010 11:33:35.687739   12710 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.424048833s
	I1010 11:33:35.687789   12710 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1010 11:33:37.578280   12710 start.go:360] acquireMachinesLock for test-preload-288000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:33:37.578752   12710 start.go:364] duration metric: took 367.541µs to acquireMachinesLock for "test-preload-288000"
	I1010 11:33:37.578850   12710 start.go:93] Provisioning new machine with config: &{Name:test-preload-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:33:37.579065   12710 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:33:37.587633   12710 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:33:37.635404   12710 start.go:159] libmachine.API.Create for "test-preload-288000" (driver="qemu2")
	I1010 11:33:37.635627   12710 client.go:168] LocalClient.Create starting
	I1010 11:33:37.635769   12710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:33:37.635845   12710 main.go:141] libmachine: Decoding PEM data...
	I1010 11:33:37.635862   12710 main.go:141] libmachine: Parsing certificate...
	I1010 11:33:37.635927   12710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:33:37.635985   12710 main.go:141] libmachine: Decoding PEM data...
	I1010 11:33:37.636009   12710 main.go:141] libmachine: Parsing certificate...
	I1010 11:33:37.636511   12710 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:33:37.792790   12710 main.go:141] libmachine: Creating SSH key...
	I1010 11:33:37.942384   12710 main.go:141] libmachine: Creating Disk image...
	I1010 11:33:37.942394   12710 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:33:37.942579   12710 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/disk.qcow2
	I1010 11:33:37.952568   12710 main.go:141] libmachine: STDOUT: 
	I1010 11:33:37.952598   12710 main.go:141] libmachine: STDERR: 
	I1010 11:33:37.952661   12710 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/disk.qcow2 +20000M
	I1010 11:33:37.961320   12710 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:33:37.961337   12710 main.go:141] libmachine: STDERR: 
	I1010 11:33:37.961354   12710 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/disk.qcow2
	I1010 11:33:37.961362   12710 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:33:37.961369   12710 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:33:37.961404   12710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:72:18:44:b9:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/test-preload-288000/disk.qcow2
	I1010 11:33:37.963282   12710 main.go:141] libmachine: STDOUT: 
	I1010 11:33:37.963299   12710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:33:37.963312   12710 client.go:171] duration metric: took 327.683083ms to LocalClient.Create
	I1010 11:33:39.963838   12710 start.go:128] duration metric: took 2.384763542s to createHost
	I1010 11:33:39.963891   12710 start.go:83] releasing machines lock for "test-preload-288000", held for 2.385136375s
	W1010 11:33:39.964125   12710 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-288000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:33:39.973296   12710 out.go:201] 
	W1010 11:33:39.977343   12710 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:33:39.977393   12710 out.go:270] * 
	W1010 11:33:39.980192   12710 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:33:39.991189   12710 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-288000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-10 11:33:40.008601 -0700 PDT m=+678.809331210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-288000 -n test-preload-288000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-288000 -n test-preload-288000: exit status 7 (70.513541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-288000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-288000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-288000
--- FAIL: TestPreload (10.02s)
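
The TestPreload trace above also shows how the qemu2 driver wires networking: it does not point qemu at the socket path, it runs /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected descriptor to qemu-system-aarch64 as fd 3 (hence "-netdev socket,id=net0,fd=3" in the logged command line). Below is a Go sketch of that descriptor hand-off, assuming the exec package's convention that ExtraFiles begin at fd 3 in the child; the real helper is a C program and may use a different socket type, so this only illustrates the mechanism:

	package main

	import (
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Connect to the vmnet helper first; qemu never opens the socket
		// path itself, it only inherits an already-connected descriptor.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			panic(err) // the step that fails throughout this report
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// ExtraFiles[0] becomes fd 3 in the child, matching -netdev socket,fd=3.
		cmd := exec.Command("qemu-system-aarch64",
			"-device", "virtio-net-pci,netdev=net0",
			"-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}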

TestScheduledStopUnix (9.95s)

                                                
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-206000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-206000 --memory=2048 --driver=qemu2 : exit status 80 (9.802340209s)

-- stdout --
	* [scheduled-stop-206000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-206000" primary control-plane node in "scheduled-stop-206000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-206000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-206000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-206000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-206000" primary control-plane node in "scheduled-stop-206000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-206000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-206000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-10 11:33:49.966174 -0700 PDT m=+688.767002251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-206000 -n scheduled-stop-206000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-206000 -n scheduled-stop-206000: exit status 7 (73.599042ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-206000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-206000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-206000
--- FAIL: TestScheduledStopUnix (9.95s)
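
Each failed start in this report takes roughly ten seconds and creates the VM twice because minikube retries once: after "! StartHost failed, but will try again" it deletes the half-created machine, waits (the TestPreload trace logs "Will try again in 5 seconds"), and repeats the create before giving up with exit status 80. A compact sketch of that shape, offered as a reading of the logs rather than minikube's actual start.go code:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver's create-and-boot step; it is a
	// placeholder, not minikube's real API.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			// the logs show the machine is deleted and recreated here
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}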

TestSkaffold (12.78s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3740204718 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3740204718 version: (1.020668959s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-831000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-831000 --memory=2600 --driver=qemu2 : exit status 80 (9.749730584s)

-- stdout --
	* [skaffold-831000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-831000" primary control-plane node in "skaffold-831000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-831000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-831000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-831000" primary control-plane node in "skaffold-831000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-831000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-10 11:34:02.741312 -0700 PDT m=+701.542265835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-831000 -n skaffold-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-831000 -n skaffold-831000: exit status 7 (68.651583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-831000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-831000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-831000
--- FAIL: TestSkaffold (12.78s)
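
Note: both qemu2 VM creation attempts above failed with connection refused on /var/run/socket_vmnet, and the run exited with status 80, matching the GUEST_PROVISION error in the stderr block. A minimal triage sketch (shell), assuming socket_vmnet was installed as a Homebrew-managed service; the service name and brew commands below are assumptions, not taken from this log:

	ls -l /var/run/socket_vmnet                 # does the socket exist at the path minikube dials?
	sudo brew services info socket_vmnet        # assumption: Homebrew-managed socket_vmnet daemon
	sudo brew services restart socket_vmnet     # restart the daemon if it is not listening
	out/minikube-darwin-arm64 delete -p skaffold-831000   # cleanup suggested by minikube itself above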

TestRunningBinaryUpgrade (606.59s)
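
Note: this test provisions a cluster with a released v1.26.0 binary and then re-runs `start` on the same profile with the binary under test; in the transcript below the legacy start succeeds in about 57 s while the upgrade start exits with status 80 after 8 m 35 s. A condensed sketch (shell) of the flow exercised below; `$OLD` and `$NEW` are illustrative names, the actual invocations appear verbatim in the log:

	OLD=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2825728446   # cached legacy binary
	NEW=out/minikube-darwin-arm64                                                      # binary under test
	"$OLD" start -p running-upgrade-704000 --memory=2200 --vm-driver=qemu2
	"$NEW" start -p running-upgrade-704000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2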

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2825728446 start -p running-upgrade-704000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2825728446 start -p running-upgrade-704000 --memory=2200 --vm-driver=qemu2 : (57.43817075s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-704000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-704000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m35.111934166s)

-- stdout --
	* [running-upgrade-704000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-704000" primary control-plane node in "running-upgrade-704000" cluster
	* Updating the running qemu2 "running-upgrade-704000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1010 11:35:42.412212   13085 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:35:42.412535   13085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:35:42.412538   13085 out.go:358] Setting ErrFile to fd 2...
	I1010 11:35:42.412540   13085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:35:42.412677   13085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:35:42.413846   13085 out.go:352] Setting JSON to false
	I1010 11:35:42.432577   13085 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7513,"bootTime":1728577829,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:35:42.432650   13085 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:35:42.437771   13085 out.go:177] * [running-upgrade-704000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:35:42.444797   13085 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:35:42.444843   13085 notify.go:220] Checking for updates...
	I1010 11:35:42.453772   13085 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:35:42.457778   13085 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:35:42.460765   13085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:35:42.463818   13085 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:35:42.466829   13085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:35:42.468442   13085 config.go:182] Loaded profile config "running-upgrade-704000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:35:42.471754   13085 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1010 11:35:42.474817   13085 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:35:42.478626   13085 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:35:42.485819   13085 start.go:297] selected driver: qemu2
	I1010 11:35:42.485825   13085 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-704000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53349 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-704000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1010 11:35:42.485873   13085 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:35:42.488822   13085 cni.go:84] Creating CNI manager for ""
	I1010 11:35:42.488855   13085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:35:42.488900   13085 start.go:340] cluster config:
	{Name:running-upgrade-704000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53349 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-704000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1010 11:35:42.488966   13085 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:35:42.496766   13085 out.go:177] * Starting "running-upgrade-704000" primary control-plane node in "running-upgrade-704000" cluster
	I1010 11:35:42.500804   13085 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1010 11:35:42.500817   13085 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1010 11:35:42.500825   13085 cache.go:56] Caching tarball of preloaded images
	I1010 11:35:42.500886   13085 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:35:42.500891   13085 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1010 11:35:42.500931   13085 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/config.json ...
	I1010 11:35:42.501283   13085 start.go:360] acquireMachinesLock for running-upgrade-704000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:35:42.501316   13085 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "running-upgrade-704000"
	I1010 11:35:42.501324   13085 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:35:42.501329   13085 fix.go:54] fixHost starting: 
	I1010 11:35:42.502036   13085 fix.go:112] recreateIfNeeded on running-upgrade-704000: state=Running err=<nil>
	W1010 11:35:42.502043   13085 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:35:42.505773   13085 out.go:177] * Updating the running qemu2 "running-upgrade-704000" VM ...
	I1010 11:35:42.513766   13085 machine.go:93] provisionDockerMachine start ...
	I1010 11:35:42.513822   13085 main.go:141] libmachine: Using SSH client type: native
	I1010 11:35:42.513929   13085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005d2480] 0x1005d4cc0 <nil>  [] 0s} localhost 53317 <nil> <nil>}
	I1010 11:35:42.513933   13085 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 11:35:42.574407   13085 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-704000
	
	I1010 11:35:42.574421   13085 buildroot.go:166] provisioning hostname "running-upgrade-704000"
	I1010 11:35:42.574489   13085 main.go:141] libmachine: Using SSH client type: native
	I1010 11:35:42.574600   13085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005d2480] 0x1005d4cc0 <nil>  [] 0s} localhost 53317 <nil> <nil>}
	I1010 11:35:42.574606   13085 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-704000 && echo "running-upgrade-704000" | sudo tee /etc/hostname
	I1010 11:35:42.636424   13085 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-704000
	
	I1010 11:35:42.636493   13085 main.go:141] libmachine: Using SSH client type: native
	I1010 11:35:42.636596   13085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005d2480] 0x1005d4cc0 <nil>  [] 0s} localhost 53317 <nil> <nil>}
	I1010 11:35:42.636606   13085 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-704000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-704000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-704000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 11:35:42.696280   13085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 11:35:42.696292   13085 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19787-10623/.minikube CaCertPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19787-10623/.minikube}
	I1010 11:35:42.696299   13085 buildroot.go:174] setting up certificates
	I1010 11:35:42.696303   13085 provision.go:84] configureAuth start
	I1010 11:35:42.696311   13085 provision.go:143] copyHostCerts
	I1010 11:35:42.696384   13085 exec_runner.go:144] found /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.pem, removing ...
	I1010 11:35:42.696391   13085 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.pem
	I1010 11:35:42.696502   13085 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.pem (1082 bytes)
	I1010 11:35:42.696690   13085 exec_runner.go:144] found /Users/jenkins/minikube-integration/19787-10623/.minikube/cert.pem, removing ...
	I1010 11:35:42.696693   13085 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19787-10623/.minikube/cert.pem
	I1010 11:35:42.696743   13085 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19787-10623/.minikube/cert.pem (1123 bytes)
	I1010 11:35:42.696863   13085 exec_runner.go:144] found /Users/jenkins/minikube-integration/19787-10623/.minikube/key.pem, removing ...
	I1010 11:35:42.696866   13085 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19787-10623/.minikube/key.pem
	I1010 11:35:42.696909   13085 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19787-10623/.minikube/key.pem (1675 bytes)
	I1010 11:35:42.697009   13085 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-704000 san=[127.0.0.1 localhost minikube running-upgrade-704000]
	I1010 11:35:42.832173   13085 provision.go:177] copyRemoteCerts
	I1010 11:35:42.832237   13085 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 11:35:42.832247   13085 sshutil.go:53] new ssh client: &{IP:localhost Port:53317 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/running-upgrade-704000/id_rsa Username:docker}
	I1010 11:35:42.864912   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 11:35:42.871441   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 11:35:42.878527   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 11:35:42.885848   13085 provision.go:87] duration metric: took 189.535584ms to configureAuth
	I1010 11:35:42.885857   13085 buildroot.go:189] setting minikube options for container-runtime
	I1010 11:35:42.885967   13085 config.go:182] Loaded profile config "running-upgrade-704000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:35:42.886009   13085 main.go:141] libmachine: Using SSH client type: native
	I1010 11:35:42.886100   13085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005d2480] 0x1005d4cc0 <nil>  [] 0s} localhost 53317 <nil> <nil>}
	I1010 11:35:42.886104   13085 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1010 11:35:42.944501   13085 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1010 11:35:42.944509   13085 buildroot.go:70] root file system type: tmpfs
	I1010 11:35:42.944560   13085 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1010 11:35:42.944616   13085 main.go:141] libmachine: Using SSH client type: native
	I1010 11:35:42.944706   13085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005d2480] 0x1005d4cc0 <nil>  [] 0s} localhost 53317 <nil> <nil>}
	I1010 11:35:42.944740   13085 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1010 11:35:43.012436   13085 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1010 11:35:43.012514   13085 main.go:141] libmachine: Using SSH client type: native
	I1010 11:35:43.012630   13085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005d2480] 0x1005d4cc0 <nil>  [] 0s} localhost 53317 <nil> <nil>}
	I1010 11:35:43.012639   13085 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1010 11:35:43.075798   13085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 11:35:43.075810   13085 machine.go:96] duration metric: took 562.043375ms to provisionDockerMachine
	I1010 11:35:43.075815   13085 start.go:293] postStartSetup for "running-upgrade-704000" (driver="qemu2")
	I1010 11:35:43.075821   13085 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 11:35:43.075892   13085 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 11:35:43.075900   13085 sshutil.go:53] new ssh client: &{IP:localhost Port:53317 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/running-upgrade-704000/id_rsa Username:docker}
	I1010 11:35:43.108115   13085 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 11:35:43.109433   13085 info.go:137] Remote host: Buildroot 2021.02.12
	I1010 11:35:43.109439   13085 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19787-10623/.minikube/addons for local assets ...
	I1010 11:35:43.109505   13085 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19787-10623/.minikube/files for local assets ...
	I1010 11:35:43.109590   13085 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/ssl/certs/111352.pem -> 111352.pem in /etc/ssl/certs
	I1010 11:35:43.109687   13085 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 11:35:43.112402   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/ssl/certs/111352.pem --> /etc/ssl/certs/111352.pem (1708 bytes)
	I1010 11:35:43.119181   13085 start.go:296] duration metric: took 43.360917ms for postStartSetup
	I1010 11:35:43.119195   13085 fix.go:56] duration metric: took 617.872917ms for fixHost
	I1010 11:35:43.119251   13085 main.go:141] libmachine: Using SSH client type: native
	I1010 11:35:43.119351   13085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005d2480] 0x1005d4cc0 <nil>  [] 0s} localhost 53317 <nil> <nil>}
	I1010 11:35:43.119356   13085 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 11:35:43.182707   13085 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728585342.953064389
	
	I1010 11:35:43.182717   13085 fix.go:216] guest clock: 1728585342.953064389
	I1010 11:35:43.182721   13085 fix.go:229] Guest: 2024-10-10 11:35:42.953064389 -0700 PDT Remote: 2024-10-10 11:35:43.119199 -0700 PDT m=+0.728680376 (delta=-166.134611ms)
	I1010 11:35:43.182735   13085 fix.go:200] guest clock delta is within tolerance: -166.134611ms
	I1010 11:35:43.182738   13085 start.go:83] releasing machines lock for "running-upgrade-704000", held for 681.42475ms
	I1010 11:35:43.182823   13085 ssh_runner.go:195] Run: cat /version.json
	I1010 11:35:43.182832   13085 sshutil.go:53] new ssh client: &{IP:localhost Port:53317 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/running-upgrade-704000/id_rsa Username:docker}
	I1010 11:35:43.182858   13085 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 11:35:43.182879   13085 sshutil.go:53] new ssh client: &{IP:localhost Port:53317 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/running-upgrade-704000/id_rsa Username:docker}
	W1010 11:35:43.183521   13085 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53317: connect: connection refused
	I1010 11:35:43.183543   13085 retry.go:31] will retry after 282.469387ms: dial tcp [::1]:53317: connect: connection refused
	W1010 11:35:43.508695   13085 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1010 11:35:43.508793   13085 ssh_runner.go:195] Run: systemctl --version
	I1010 11:35:43.511574   13085 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 11:35:43.513927   13085 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 11:35:43.513973   13085 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1010 11:35:43.518517   13085 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1010 11:35:43.524780   13085 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 11:35:43.524788   13085 start.go:495] detecting cgroup driver to use...
	I1010 11:35:43.524924   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 11:35:43.531232   13085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1010 11:35:43.535049   13085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1010 11:35:43.538634   13085 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1010 11:35:43.538674   13085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1010 11:35:43.542121   13085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1010 11:35:43.545402   13085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1010 11:35:43.548335   13085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1010 11:35:43.551438   13085 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 11:35:43.554749   13085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1010 11:35:43.557814   13085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1010 11:35:43.561016   13085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1010 11:35:43.563776   13085 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 11:35:43.566642   13085 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 11:35:43.569616   13085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:35:43.657494   13085 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1010 11:35:43.667872   13085 start.go:495] detecting cgroup driver to use...
	I1010 11:35:43.667972   13085 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1010 11:35:43.673829   13085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 11:35:43.678737   13085 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 11:35:43.684928   13085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 11:35:43.689275   13085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1010 11:35:43.693936   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 11:35:43.699643   13085 ssh_runner.go:195] Run: which cri-dockerd
	I1010 11:35:43.700934   13085 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1010 11:35:43.703470   13085 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1010 11:35:43.708644   13085 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1010 11:35:43.798611   13085 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1010 11:35:43.891121   13085 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1010 11:35:43.891187   13085 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1010 11:35:43.896470   13085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:35:43.992285   13085 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1010 11:35:57.391800   13085 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.399630542s)
	I1010 11:35:57.391882   13085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1010 11:35:57.397110   13085 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1010 11:35:57.404756   13085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1010 11:35:57.410001   13085 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1010 11:35:57.496826   13085 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1010 11:35:57.564430   13085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:35:57.642962   13085 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1010 11:35:57.649043   13085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1010 11:35:57.653617   13085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:35:57.712667   13085 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1010 11:35:57.751455   13085 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1010 11:35:57.751537   13085 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1010 11:35:57.754097   13085 start.go:563] Will wait 60s for crictl version
	I1010 11:35:57.754170   13085 ssh_runner.go:195] Run: which crictl
	I1010 11:35:57.755671   13085 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 11:35:57.767561   13085 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1010 11:35:57.767638   13085 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1010 11:35:57.781269   13085 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1010 11:35:57.797914   13085 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1010 11:35:57.798013   13085 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1010 11:35:57.799573   13085 kubeadm.go:883] updating cluster {Name:running-upgrade-704000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53349 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-704000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1010 11:35:57.799613   13085 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1010 11:35:57.799673   13085 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1010 11:35:57.810247   13085 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1010 11:35:57.810261   13085 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1010 11:35:57.810320   13085 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1010 11:35:57.813795   13085 ssh_runner.go:195] Run: which lz4
	I1010 11:35:57.815165   13085 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 11:35:57.816341   13085 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 11:35:57.816351   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1010 11:35:58.734444   13085 docker.go:649] duration metric: took 919.328958ms to copy over tarball
	I1010 11:35:58.734509   13085 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 11:36:00.582843   13085 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.848339625s)
	I1010 11:36:00.582862   13085 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 11:36:00.599279   13085 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1010 11:36:00.602662   13085 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1010 11:36:00.607961   13085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:36:00.684923   13085 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1010 11:36:01.019325   13085 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1010 11:36:01.030602   13085 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1010 11:36:01.030611   13085 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1010 11:36:01.030616   13085 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 11:36:01.036343   13085 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:36:01.038561   13085 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:36:01.039739   13085 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:36:01.039941   13085 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:36:01.041788   13085 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:36:01.041846   13085 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:36:01.043210   13085 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:36:01.043237   13085 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:36:01.044353   13085 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:36:01.044622   13085 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1010 11:36:01.045664   13085 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:36:01.045669   13085 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1010 11:36:01.046986   13085 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:36:01.046997   13085 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1010 11:36:01.047845   13085 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1010 11:36:01.048631   13085 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:36:01.518794   13085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:36:01.533201   13085 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1010 11:36:01.533230   13085 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:36:01.533280   13085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:36:01.545456   13085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1010 11:36:01.584137   13085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:36:01.595087   13085 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1010 11:36:01.595116   13085 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:36:01.595160   13085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:36:01.595164   13085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:36:01.612460   13085 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1010 11:36:01.612486   13085 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:36:01.612556   13085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:36:01.612656   13085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1010 11:36:01.623347   13085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1010 11:36:01.634375   13085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:36:01.644564   13085 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1010 11:36:01.644589   13085 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:36:01.644652   13085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:36:01.654822   13085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1010 11:36:01.718558   13085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1010 11:36:01.730438   13085 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1010 11:36:01.730460   13085 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1010 11:36:01.730526   13085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1010 11:36:01.740183   13085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1010 11:36:01.754870   13085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1010 11:36:01.766256   13085 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1010 11:36:01.766279   13085 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1010 11:36:01.766339   13085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1010 11:36:01.776354   13085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1010 11:36:01.776484   13085 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1010 11:36:01.779038   13085 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1010 11:36:01.779053   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1010 11:36:01.787591   13085 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1010 11:36:01.787600   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1010 11:36:01.814439   13085 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1010 11:36:01.857573   13085 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1010 11:36:01.857748   13085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:36:01.869172   13085 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1010 11:36:01.869199   13085 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:36:01.869259   13085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:36:01.879750   13085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1010 11:36:01.879895   13085 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1010 11:36:01.881254   13085 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1010 11:36:01.881267   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1010 11:36:01.924079   13085 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1010 11:36:01.924094   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W1010 11:36:01.928875   13085 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1010 11:36:01.929006   13085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:36:01.964323   13085 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1010 11:36:01.964349   13085 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1010 11:36:01.964375   13085 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:36:01.964432   13085 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:36:02.955704   13085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 11:36:02.956214   13085 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 11:36:02.961359   13085 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1010 11:36:02.961443   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1010 11:36:03.019134   13085 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 11:36:03.019155   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1010 11:36:03.302762   13085 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 11:36:03.302807   13085 cache_images.go:92] duration metric: took 2.272207208s to LoadCachedImages
	W1010 11:36:03.302848   13085 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I1010 11:36:03.302855   13085 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1010 11:36:03.302923   13085 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-704000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-704000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 11:36:03.303008   13085 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1010 11:36:03.323122   13085 cni.go:84] Creating CNI manager for ""
	I1010 11:36:03.323139   13085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:36:03.323144   13085 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 11:36:03.323153   13085 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-704000 NodeName:running-upgrade-704000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 11:36:03.323215   13085 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-704000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
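	The kubeadm config above is one YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---; kubeadm dispatches on each document's kind. A stdlib-only Go sketch of that split (the file name is a placeholder for /var/tmp/minikube/kubeadm.yaml.new):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    func main() {
    	// Placeholder path; the log writes /var/tmp/minikube/kubeadm.yaml.new.
    	raw, err := os.ReadFile("kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
    	// kubeadm treats the file as a stream of documents separated by ---.
    	for i, doc := range strings.Split(string(raw), "\n---\n") {
    		if m := kindRe.FindStringSubmatch(doc); m != nil {
    			fmt.Printf("document %d: kind=%s\n", i, m[1])
    		}
    	}
    }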
	
	I1010 11:36:03.323277   13085 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1010 11:36:03.334197   13085 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 11:36:03.334259   13085 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 11:36:03.337724   13085 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1010 11:36:03.343640   13085 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 11:36:03.353463   13085 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1010 11:36:03.362178   13085 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1010 11:36:03.363881   13085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:36:03.524900   13085 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 11:36:03.541441   13085 certs.go:68] Setting up /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000 for IP: 10.0.2.15
	I1010 11:36:03.541466   13085 certs.go:194] generating shared ca certs ...
	I1010 11:36:03.541476   13085 certs.go:226] acquiring lock for ca certs: {Name:mk609fb55a881bb4c70c8ff17f366ce3ffd355cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:36:03.541738   13085 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.key
	I1010 11:36:03.541800   13085 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/proxy-client-ca.key
	I1010 11:36:03.541805   13085 certs.go:256] generating profile certs ...
	I1010 11:36:03.541891   13085 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/client.key
	I1010 11:36:03.541903   13085 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/apiserver.key.46e4f32a
	I1010 11:36:03.541914   13085 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/apiserver.crt.46e4f32a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1010 11:36:03.717112   13085 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/apiserver.crt.46e4f32a ...
	I1010 11:36:03.717129   13085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/apiserver.crt.46e4f32a: {Name:mkb9161f8d589ecd6282e50dc11bbb4e64422ace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:36:03.717442   13085 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/apiserver.key.46e4f32a ...
	I1010 11:36:03.717452   13085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/apiserver.key.46e4f32a: {Name:mk82e632f2c9a65a2552a995b9a133311d9aa7da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:36:03.717614   13085 certs.go:381] copying /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/apiserver.crt.46e4f32a -> /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/apiserver.crt
	I1010 11:36:03.717751   13085 certs.go:385] copying /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/apiserver.key.46e4f32a -> /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/apiserver.key
	I1010 11:36:03.717919   13085 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/proxy-client.key
	I1010 11:36:03.718077   13085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/11135.pem (1338 bytes)
	W1010 11:36:03.718113   13085 certs.go:480] ignoring /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/11135_empty.pem, impossibly tiny 0 bytes
	I1010 11:36:03.718119   13085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 11:36:03.718151   13085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem (1082 bytes)
	I1010 11:36:03.718182   13085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem (1123 bytes)
	I1010 11:36:03.718213   13085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/key.pem (1675 bytes)
	I1010 11:36:03.718275   13085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/ssl/certs/111352.pem (1708 bytes)
	I1010 11:36:03.718755   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 11:36:03.727006   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1010 11:36:03.747775   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 11:36:03.771837   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 11:36:03.792559   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 11:36:03.808391   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 11:36:03.822422   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 11:36:03.840861   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 11:36:03.857521   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/ssl/certs/111352.pem --> /usr/share/ca-certificates/111352.pem (1708 bytes)
	I1010 11:36:03.867404   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 11:36:03.877694   13085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/11135.pem --> /usr/share/ca-certificates/11135.pem (1338 bytes)
	I1010 11:36:03.888951   13085 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 11:36:03.913536   13085 ssh_runner.go:195] Run: openssl version
	I1010 11:36:03.916213   13085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111352.pem && ln -fs /usr/share/ca-certificates/111352.pem /etc/ssl/certs/111352.pem"
	I1010 11:36:03.923330   13085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111352.pem
	I1010 11:36:03.935050   13085 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:23 /usr/share/ca-certificates/111352.pem
	I1010 11:36:03.935107   13085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111352.pem
	I1010 11:36:03.940641   13085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111352.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 11:36:03.943649   13085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 11:36:03.951155   13085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 11:36:03.953069   13085 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 18:35 /usr/share/ca-certificates/minikubeCA.pem
	I1010 11:36:03.953097   13085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 11:36:03.955329   13085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 11:36:03.958894   13085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11135.pem && ln -fs /usr/share/ca-certificates/11135.pem /etc/ssl/certs/11135.pem"
	I1010 11:36:03.966893   13085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11135.pem
	I1010 11:36:03.970200   13085 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:23 /usr/share/ca-certificates/11135.pem
	I1010 11:36:03.970243   13085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11135.pem
	I1010 11:36:03.973881   13085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11135.pem /etc/ssl/certs/51391683.0"
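	The openssl x509 -hash -noout runs above print the subject-name hash that OpenSSL uses to look up CAs, and each ln -fs then publishes the PEM under /etc/ssl/certs/<hash>.0 (b5213941 is the minikubeCA hash seen in the log). A sketch of the same two steps from Go, shelling out to openssl (cert path is a placeholder; writing /etc/ssl/certs needs root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Placeholder path; the log links the PEMs under /usr/share/ca-certificates.
    	certPath := "minikubeCA.pem"
    	// `openssl x509 -hash -noout` prints the subject-name hash OpenSSL uses
    	// to locate CA certificates.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	// Publish the cert under <hash>.0, as the ln -fs runs above do.
    	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
    		panic(err)
    	}
    	fmt.Println("linked", link)
    }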
	I1010 11:36:03.977359   13085 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 11:36:03.984022   13085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 11:36:03.985745   13085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 11:36:03.987491   13085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 11:36:03.995627   13085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 11:36:03.997480   13085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 11:36:03.999422   13085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
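	Each openssl x509 -checkend 86400 run above exits 0 only if the certificate stays valid for at least another 86400 seconds (24 hours); that is how the restart path decides the existing certs can be reused. The equivalent check in Go with crypto/x509, as a sketch (the input path is a placeholder for the certs under /var/lib/minikube/certs):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Placeholder for one of the certs under /var/lib/minikube/certs.
    	raw, err := os.ReadFile("apiserver.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("no PEM data found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Mirror `openssl x509 -checkend 86400`: fail if expiry is within 24h.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid for at least another 24h")
    }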
	I1010 11:36:04.003622   13085 kubeadm.go:392] StartCluster: {Name:running-upgrade-704000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53349 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-704000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1010 11:36:04.003705   13085 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1010 11:36:04.037980   13085 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 11:36:04.041666   13085 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 11:36:04.041676   13085 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 11:36:04.041706   13085 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 11:36:04.044608   13085 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 11:36:04.044648   13085 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-704000" does not appear in /Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:36:04.044663   13085 kubeconfig.go:62] /Users/jenkins/minikube-integration/19787-10623/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-704000" cluster setting kubeconfig missing "running-upgrade-704000" context setting]
	I1010 11:36:04.044833   13085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/kubeconfig: {Name:mk76f18909e94718c05e51991d3ea4660849ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:36:04.045507   13085 kapi.go:59] client config for running-upgrade-704000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/client.key", CAFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10202aa80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1010 11:36:04.046447   13085 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 11:36:04.049119   13085 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-704000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
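	The drift detection above rests on diff -u exit codes: 0 means the stored and freshly generated kubeadm.yaml match, 1 means they differ and the cluster must be reconfigured, 2 signals an error. A hedged Go sketch of that decision (paths are placeholders for the /var/tmp/minikube pair):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configDrifted mimics the check above: diff -u exits 0 when the files
    // match, 1 when they differ, and 2 on error.
    func configDrifted(oldPath, newPath string) (bool, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, nil // identical: no reconfiguration needed
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		fmt.Print(string(out)) // the unified diff, as logged above
    		return true, nil
    	}
    	return false, err
    }

    func main() {
    	// Placeholder paths for the /var/tmp/minikube/kubeadm.yaml pair.
    	drifted, err := configDrifted("kubeadm.yaml", "kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("drift:", drifted)
    }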
	I1010 11:36:04.049126   13085 kubeadm.go:1160] stopping kube-system containers ...
	I1010 11:36:04.049175   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1010 11:36:04.078042   13085 docker.go:483] Stopping containers: [fe7dc23baec1 c69c376a22ae e72a4aca3378 93056d740505 1182736d7364 137dbec3d7d7 c65ca2b2565d 1ce2222635bd 0d503d26e7de 9cd01148acde d6bf834c06f7 7b22028f10a5 d00d7eacaa47 c3107f1b6b3e 2ae97bb83024 e3c8b8559da2 33cbaf980207 f26ed846ba3d 37284668955d 15514405a7d5 ee4dad8d3ae9 dc751061de23 10cc42d75586]
	I1010 11:36:04.078135   13085 ssh_runner.go:195] Run: docker stop fe7dc23baec1 c69c376a22ae e72a4aca3378 93056d740505 1182736d7364 137dbec3d7d7 c65ca2b2565d 1ce2222635bd 0d503d26e7de 9cd01148acde d6bf834c06f7 7b22028f10a5 d00d7eacaa47 c3107f1b6b3e 2ae97bb83024 e3c8b8559da2 33cbaf980207 f26ed846ba3d 37284668955d 15514405a7d5 ee4dad8d3ae9 dc751061de23 10cc42d75586
	I1010 11:36:04.702646   13085 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 11:36:04.776530   13085 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 11:36:04.780372   13085 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct 10 18:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Oct 10 18:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 10 18:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct 10 18:35 /etc/kubernetes/scheduler.conf
	
	I1010 11:36:04.780418   13085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/admin.conf
	I1010 11:36:04.786344   13085 kubeadm.go:163] "https://control-plane.minikube.internal:53349" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1010 11:36:04.786384   13085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 11:36:04.793000   13085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/kubelet.conf
	I1010 11:36:04.802415   13085 kubeadm.go:163] "https://control-plane.minikube.internal:53349" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1010 11:36:04.802485   13085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 11:36:04.805597   13085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/controller-manager.conf
	I1010 11:36:04.808587   13085 kubeadm.go:163] "https://control-plane.minikube.internal:53349" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1010 11:36:04.808629   13085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 11:36:04.812683   13085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/scheduler.conf
	I1010 11:36:04.816187   13085 kubeadm.go:163] "https://control-plane.minikube.internal:53349" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1010 11:36:04.816232   13085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 11:36:04.819110   13085 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 11:36:04.826966   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:36:04.851332   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:36:05.344605   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:36:05.553459   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:36:05.577691   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:36:05.598338   13085 api_server.go:52] waiting for apiserver process to appear ...
	I1010 11:36:05.598424   13085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:36:06.100614   13085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:36:06.600532   13085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:36:07.100440   13085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:36:07.105202   13085 api_server.go:72] duration metric: took 1.50688125s to wait for apiserver process to appear ...
	I1010 11:36:07.105211   13085 api_server.go:88] waiting for apiserver healthz status ...
	I1010 11:36:07.105239   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:36:12.107445   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:36:12.107538   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:36:17.108382   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:36:17.108460   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:36:22.109177   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:36:22.109235   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:36:27.110192   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:36:27.110272   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:36:32.111662   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:36:32.111710   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:36:37.113237   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:36:37.113337   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:36:42.115569   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:36:42.115617   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:36:47.118009   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:36:47.118104   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:36:52.120623   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:36:52.120646   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:36:57.122962   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:36:57.123043   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:37:02.125700   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:37:02.125787   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:37:07.128339   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
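	Each healthz probe above gives up after roughly five seconds (the "Client.Timeout exceeded" errors) and is retried until an overall deadline; once that passes, the tool falls back to gathering component logs, as below. A minimal sketch of one probe in Go (TLS verification is skipped here for brevity; the real client trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		// Matches the ~5s spacing of the probes in the log.
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Sketch shortcut: the real check trusts the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err) // what api_server.go:269 reports
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    }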
	I1010 11:37:07.128722   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:37:07.168160   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:37:07.168350   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:37:07.190379   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:37:07.190514   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:37:07.207018   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:37:07.207099   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:37:07.219031   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:37:07.219106   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:37:07.229683   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:37:07.229753   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:37:07.240206   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:37:07.240297   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:37:07.249905   13085 logs.go:282] 0 containers: []
	W1010 11:37:07.249916   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:37:07.249974   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:37:07.260189   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:37:07.260206   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:37:07.260211   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:37:07.270843   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:37:07.270855   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:37:07.282157   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:37:07.282168   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:37:07.308341   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:37:07.308348   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:37:07.322278   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:37:07.322288   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:37:07.338344   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:37:07.338353   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:37:07.349792   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:37:07.349805   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:37:07.363067   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:37:07.363078   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:37:07.400489   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:37:07.400497   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:37:07.476373   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:37:07.476385   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:37:07.488650   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:37:07.488661   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:37:07.502284   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:37:07.502293   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:37:07.514148   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:37:07.514160   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:37:07.518389   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:37:07.518395   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:37:07.529597   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:37:07.529609   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:37:07.541046   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:37:07.541056   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:37:10.060194   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:37:15.061277   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:37:15.061741   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:37:15.105468   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:37:15.105638   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:37:15.125870   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:37:15.126013   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:37:15.142007   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:37:15.142089   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:37:15.154180   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:37:15.154257   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:37:15.164797   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:37:15.164870   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:37:15.175728   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:37:15.175818   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:37:15.185262   13085 logs.go:282] 0 containers: []
	W1010 11:37:15.185274   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:37:15.185339   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:37:15.195529   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:37:15.195549   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:37:15.195554   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:37:15.207010   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:37:15.207020   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:37:15.224008   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:37:15.224018   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:37:15.247157   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:37:15.247169   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:37:15.260430   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:37:15.260441   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:37:15.272542   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:37:15.272554   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:37:15.309852   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:37:15.309861   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:37:15.321920   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:37:15.321931   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:37:15.332849   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:37:15.332863   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:37:15.344505   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:37:15.344516   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:37:15.360285   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:37:15.360295   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:37:15.371839   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:37:15.371849   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:37:15.385489   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:37:15.385500   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:37:15.397166   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:37:15.397176   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:37:15.423437   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:37:15.423450   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:37:15.428190   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:37:15.428196   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:37:17.967796   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:37:22.970358   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:37:22.970880   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:37:23.009604   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:37:23.009760   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:37:23.030295   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:37:23.030408   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:37:23.044805   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:37:23.044877   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:37:23.056891   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:37:23.056975   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:37:23.067909   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:37:23.067978   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:37:23.078884   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:37:23.078951   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:37:23.089471   13085 logs.go:282] 0 containers: []
	W1010 11:37:23.089485   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:37:23.089538   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:37:23.099648   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:37:23.099666   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:37:23.099673   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:37:23.139030   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:37:23.139041   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:37:23.150372   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:37:23.150382   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:37:23.176288   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:37:23.176296   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:37:23.193105   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:37:23.193115   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:37:23.232525   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:37:23.232533   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:37:23.246093   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:37:23.246104   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:37:23.259854   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:37:23.259864   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:37:23.271178   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:37:23.271194   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:37:23.283334   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:37:23.283346   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:37:23.295025   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:37:23.295037   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:37:23.307044   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:37:23.307055   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:37:23.320057   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:37:23.320068   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:37:23.330937   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:37:23.330948   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:37:23.346984   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:37:23.346993   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:37:23.351529   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:37:23.351535   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:37:25.865370   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:37:30.867329   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:37:30.867543   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:37:30.886653   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:37:30.886767   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:37:30.900033   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:37:30.900115   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:37:30.911594   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:37:30.911687   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:37:30.921920   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:37:30.922010   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:37:30.932701   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:37:30.932774   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:37:30.943408   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:37:30.943496   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:37:30.953422   13085 logs.go:282] 0 containers: []
	W1010 11:37:30.953432   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:37:30.953506   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:37:30.964160   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:37:30.964177   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:37:30.964183   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:37:30.980365   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:37:30.980377   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:37:30.991502   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:37:30.991514   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:37:31.002322   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:37:31.002332   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:37:31.006830   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:37:31.006835   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:37:31.018065   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:37:31.018077   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:37:31.029705   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:37:31.029715   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:37:31.044070   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:37:31.044079   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:37:31.060181   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:37:31.060191   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:37:31.079045   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:37:31.079055   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:37:31.104967   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:37:31.104976   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:37:31.120901   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:37:31.120912   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:37:31.133363   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:37:31.133374   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:37:31.172728   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:37:31.172737   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:37:31.206703   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:37:31.206712   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:37:31.220443   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:37:31.220451   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:37:33.735581   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:37:38.738321   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:37:38.738698   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:37:38.772702   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:37:38.772828   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:37:38.792181   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:37:38.792285   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:37:38.806383   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:37:38.806455   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:37:38.818203   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:37:38.818282   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:37:38.833409   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:37:38.833503   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:37:38.845061   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:37:38.845145   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:37:38.855085   13085 logs.go:282] 0 containers: []
	W1010 11:37:38.855097   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:37:38.855157   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:37:38.865591   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:37:38.865608   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:37:38.865615   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:37:38.900089   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:37:38.900099   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:37:38.913986   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:37:38.913996   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:37:38.919076   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:37:38.919083   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:37:38.935072   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:37:38.935083   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:37:38.946554   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:37:38.946563   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:37:38.958428   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:37:38.958441   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:37:38.985376   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:37:38.985385   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:37:38.998846   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:37:38.998857   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:37:39.012997   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:37:39.013008   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:37:39.026207   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:37:39.026217   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:37:39.038277   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:37:39.038287   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:37:39.061716   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:37:39.061726   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:37:39.072643   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:37:39.072656   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:37:39.083727   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:37:39.083737   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:37:39.095426   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:37:39.095438   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:37:41.636625   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:37:46.639302   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:37:46.639711   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:37:46.679841   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:37:46.679983   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:37:46.701584   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:37:46.701737   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:37:46.716943   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:37:46.717045   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:37:46.729135   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:37:46.729239   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:37:46.739822   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:37:46.739912   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:37:46.751298   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:37:46.751372   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:37:46.761567   13085 logs.go:282] 0 containers: []
	W1010 11:37:46.761578   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:37:46.761672   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:37:46.773331   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:37:46.773346   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:37:46.773351   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:37:46.784696   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:37:46.784709   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:37:46.788886   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:37:46.788892   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:37:46.802616   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:37:46.802627   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:37:46.819513   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:37:46.819523   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:37:46.830956   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:37:46.830966   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:37:46.842409   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:37:46.842418   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:37:46.867338   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:37:46.867346   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:37:46.879634   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:37:46.879644   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:37:46.891154   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:37:46.891163   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:37:46.931126   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:37:46.931134   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:37:46.944317   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:37:46.944327   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:37:46.960609   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:37:46.960620   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:37:46.974685   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:37:46.974696   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:37:46.992385   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:37:46.992395   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:37:47.027501   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:37:47.027512   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:37:49.541593   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:37:54.544454   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
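This is the failure mode repeated for the rest of the run: each probe of https://10.0.2.15:8443/healthz dies after about 5 s with "Client.Timeout exceeded while awaiting headers", which is the error Go's net/http returns when an http.Client's Timeout elapses before response headers arrive. A minimal sketch of such a probe loop; the timeout and retry interval here are read off the log cadence, not minikube's actual constants:

    // Poll the apiserver /healthz endpoint with a 5 s client timeout until it
    // returns 200 OK. InsecureSkipVerify stands in for trusting the
    // apiserver's self-signed cert; URL, timeout, and sleep are assumptions
    // taken from the log above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // expiry yields "Client.Timeout exceeded while awaiting headers"
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err) // matches api_server.go:269 above
    		} else {
    			ok := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if ok {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("unhealthy:", resp.Status)
    		}
    		time.Sleep(3 * time.Second) // back off before the next probe
    	}
    }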
	I1010 11:37:54.544858   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:37:54.578811   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:37:54.578957   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:37:54.601261   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:37:54.601414   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:37:54.617419   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:37:54.617505   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:37:54.630505   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:37:54.630584   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:37:54.641090   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:37:54.641162   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:37:54.651616   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:37:54.651695   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:37:54.661646   13085 logs.go:282] 0 containers: []
	W1010 11:37:54.661657   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:37:54.661723   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:37:54.671835   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:37:54.671852   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:37:54.671857   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:37:54.705658   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:37:54.705669   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:37:54.718822   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:37:54.718830   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:37:54.730043   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:37:54.730055   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:37:54.741408   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:37:54.741419   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:37:54.745647   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:37:54.745655   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:37:54.759964   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:37:54.759975   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:37:54.771417   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:37:54.771428   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:37:54.783387   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:37:54.783397   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:37:54.823495   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:37:54.823504   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:37:54.837716   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:37:54.837727   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:37:54.849511   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:37:54.849521   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:37:54.868104   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:37:54.868114   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:37:54.885906   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:37:54.885918   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:37:54.913084   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:37:54.913095   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:37:54.941898   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:37:54.941912   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:37:57.456219   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:38:02.458635   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:38:02.458794   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:38:02.475604   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:38:02.475693   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:38:02.488880   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:38:02.488973   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:38:02.499932   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:38:02.500006   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:38:02.510556   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:38:02.510637   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:38:02.521364   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:38:02.521437   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:38:02.532459   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:38:02.532550   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:38:02.542718   13085 logs.go:282] 0 containers: []
	W1010 11:38:02.542729   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:38:02.542795   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:38:02.553323   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:38:02.553342   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:38:02.553347   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:38:02.564990   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:38:02.565001   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:38:02.578436   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:38:02.578445   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:38:02.595205   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:38:02.595214   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:38:02.620689   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:38:02.620699   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:38:02.632492   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:38:02.632504   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:38:02.636992   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:38:02.636998   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:38:02.650877   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:38:02.650888   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:38:02.662283   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:38:02.662296   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:38:02.674195   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:38:02.674205   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:38:02.685688   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:38:02.685699   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:38:02.725250   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:38:02.725262   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:38:02.761006   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:38:02.761018   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:38:02.775654   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:38:02.775666   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:38:02.787488   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:38:02.787500   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:38:02.805676   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:38:02.805686   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:38:05.320015   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:38:10.322737   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:38:10.322870   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:38:10.334333   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:38:10.334434   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:38:10.345676   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:38:10.345772   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:38:10.356961   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:38:10.357050   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:38:10.368219   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:38:10.368306   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:38:10.379299   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:38:10.379388   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:38:10.390715   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:38:10.390803   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:38:10.401499   13085 logs.go:282] 0 containers: []
	W1010 11:38:10.401510   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:38:10.401572   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:38:10.413988   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:38:10.414006   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:38:10.414011   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:38:10.427756   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:38:10.427766   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:38:10.445135   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:38:10.445145   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:38:10.458877   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:38:10.458888   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:38:10.463877   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:38:10.463883   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:38:10.476933   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:38:10.476944   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:38:10.491400   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:38:10.491410   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:38:10.506455   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:38:10.506466   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:38:10.518353   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:38:10.518364   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:38:10.547367   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:38:10.547378   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:38:10.559852   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:38:10.559863   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:38:10.597715   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:38:10.597726   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:38:10.609342   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:38:10.609351   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:38:10.621185   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:38:10.621196   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:38:10.639336   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:38:10.639347   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:38:10.651107   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:38:10.651120   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:38:13.192590   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:38:18.194788   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:38:18.194962   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:38:18.208324   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:38:18.208423   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:38:18.219544   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:38:18.219626   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:38:18.230688   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:38:18.230776   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:38:18.241519   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:38:18.241602   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:38:18.252307   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:38:18.252378   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:38:18.263133   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:38:18.263206   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:38:18.273528   13085 logs.go:282] 0 containers: []
	W1010 11:38:18.273540   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:38:18.273599   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:38:18.284074   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:38:18.284094   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:38:18.284099   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:38:18.296787   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:38:18.296799   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:38:18.310489   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:38:18.310501   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:38:18.346503   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:38:18.346515   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:38:18.358466   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:38:18.358477   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:38:18.375158   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:38:18.375168   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:38:18.394088   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:38:18.394100   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:38:18.405583   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:38:18.405596   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:38:18.420794   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:38:18.420805   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:38:18.432854   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:38:18.432865   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:38:18.444490   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:38:18.444502   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:38:18.456794   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:38:18.456807   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:38:18.479953   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:38:18.479964   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:38:18.484604   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:38:18.484612   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:38:18.501948   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:38:18.501959   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:38:18.528387   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:38:18.528397   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:38:21.072715   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:38:26.073286   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:38:26.073392   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:38:26.085911   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:38:26.085995   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:38:26.097960   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:38:26.098037   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:38:26.109590   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:38:26.109671   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:38:26.121654   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:38:26.121732   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:38:26.133596   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:38:26.133679   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:38:26.144742   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:38:26.144821   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:38:26.156042   13085 logs.go:282] 0 containers: []
	W1010 11:38:26.156054   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:38:26.156119   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:38:26.168622   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:38:26.168638   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:38:26.168644   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:38:26.182109   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:38:26.182122   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:38:26.196015   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:38:26.196030   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:38:26.201196   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:38:26.201208   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:38:26.239833   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:38:26.239845   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:38:26.260429   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:38:26.260441   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:38:26.279038   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:38:26.279052   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:38:26.294134   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:38:26.294147   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:38:26.308741   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:38:26.308754   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:38:26.350763   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:38:26.350780   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:38:26.363775   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:38:26.363787   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:38:26.376089   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:38:26.376101   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:38:26.392887   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:38:26.392899   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:38:26.410371   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:38:26.410384   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:38:26.423813   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:38:26.423826   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:38:26.436867   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:38:26.436878   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:38:28.964893   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:38:33.967542   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:38:33.967745   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:38:33.978909   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:38:33.979014   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:38:33.990407   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:38:33.990496   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:38:34.001638   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:38:34.001718   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:38:34.012930   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:38:34.013013   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:38:34.024243   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:38:34.024319   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:38:34.035056   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:38:34.035138   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:38:34.045968   13085 logs.go:282] 0 containers: []
	W1010 11:38:34.045980   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:38:34.046045   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:38:34.056573   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:38:34.056592   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:38:34.056599   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:38:34.061320   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:38:34.061331   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:38:34.101201   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:38:34.101213   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:38:34.114474   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:38:34.114486   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:38:34.127044   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:38:34.127057   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:38:34.151871   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:38:34.151880   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:38:34.163896   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:38:34.163908   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:38:34.204291   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:38:34.204301   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:38:34.224880   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:38:34.224890   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:38:34.236563   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:38:34.236575   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:38:34.248252   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:38:34.248263   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:38:34.265271   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:38:34.265281   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:38:34.278675   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:38:34.278686   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:38:34.297421   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:38:34.297436   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:38:34.309867   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:38:34.309881   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:38:34.325122   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:38:34.325135   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:38:36.840615   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:38:41.842997   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:38:41.843504   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:38:41.882912   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:38:41.883066   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:38:41.904805   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:38:41.904924   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:38:41.922508   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:38:41.922600   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:38:41.934463   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:38:41.934540   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:38:41.945231   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:38:41.945311   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:38:41.955625   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:38:41.955718   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:38:41.965935   13085 logs.go:282] 0 containers: []
	W1010 11:38:41.965946   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:38:41.966021   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:38:41.984318   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:38:41.984336   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:38:41.984342   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:38:41.995992   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:38:41.996003   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:38:42.021123   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:38:42.021133   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:38:42.032669   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:38:42.032683   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:38:42.044268   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:38:42.044281   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:38:42.082310   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:38:42.082323   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:38:42.096210   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:38:42.096221   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:38:42.109902   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:38:42.109915   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:38:42.122903   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:38:42.122916   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:38:42.141173   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:38:42.141189   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:38:42.153647   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:38:42.153659   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:38:42.172564   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:38:42.172578   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:38:42.185586   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:38:42.185598   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:38:42.190930   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:38:42.190943   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:38:42.231563   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:38:42.231580   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:38:42.247526   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:38:42.247539   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:38:44.762673   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:38:49.764980   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:38:49.765452   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:38:49.804604   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:38:49.804754   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:38:49.826273   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:38:49.826393   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:38:49.841500   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:38:49.841591   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:38:49.854008   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:38:49.854092   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:38:49.864771   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:38:49.864855   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:38:49.874932   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:38:49.875012   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:38:49.885438   13085 logs.go:282] 0 containers: []
	W1010 11:38:49.885449   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:38:49.885515   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:38:49.896186   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:38:49.896204   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:38:49.896210   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:38:49.933415   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:38:49.933423   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:38:49.945433   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:38:49.945443   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:38:49.959266   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:38:49.959275   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:38:49.978074   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:38:49.978084   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:38:50.003144   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:38:50.003152   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:38:50.015199   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:38:50.015210   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:38:50.026540   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:38:50.026552   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:38:50.037437   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:38:50.037449   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:38:50.048774   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:38:50.048784   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:38:50.067287   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:38:50.067297   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:38:50.078610   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:38:50.078626   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:38:50.093798   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:38:50.093808   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:38:50.098693   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:38:50.098700   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:38:50.133855   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:38:50.133866   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:38:50.150093   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:38:50.150103   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:38:52.665412   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:38:57.665817   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:38:57.665969   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:38:57.680209   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:38:57.680289   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:38:57.691625   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:38:57.691700   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:38:57.702635   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:38:57.702721   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:38:57.714760   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:38:57.714872   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:38:57.726589   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:38:57.726679   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:38:57.738257   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:38:57.738342   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:38:57.755706   13085 logs.go:282] 0 containers: []
	W1010 11:38:57.755725   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:38:57.755801   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:38:57.766748   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:38:57.766767   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:38:57.766772   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:38:57.785576   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:38:57.785587   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:38:57.830143   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:38:57.830160   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:38:57.835543   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:38:57.835554   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:38:57.874339   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:38:57.874351   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:38:57.889581   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:38:57.889593   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:38:57.904219   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:38:57.904231   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:38:57.916055   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:38:57.916071   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:38:57.933457   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:38:57.933470   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:38:57.945502   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:38:57.945514   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:38:57.959915   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:38:57.959926   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:38:57.972316   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:38:57.972332   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:38:57.997342   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:38:57.997356   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:38:58.009596   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:38:58.009608   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:38:58.022283   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:38:58.022293   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:38:58.034726   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:38:58.034737   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:00.548494   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:05.550757   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:05.550870   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:05.561739   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:05.561816   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:05.572432   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:05.572526   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:05.584450   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:05.584521   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:05.596805   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:05.596898   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:05.610136   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:05.610230   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:05.620522   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:05.620609   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:05.630621   13085 logs.go:282] 0 containers: []
	W1010 11:39:05.630636   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:05.630704   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:05.641798   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:05.641816   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:05.641821   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:05.682560   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:05.682587   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:05.727869   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:05.727880   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:05.739687   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:05.739698   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:05.755960   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:05.755971   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:05.766983   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:05.766994   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:05.778746   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:05.778757   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:05.783226   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:05.783232   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:05.796604   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:05.796616   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:05.821327   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:05.821336   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:05.833729   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:05.833740   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:05.848462   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:05.848473   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:05.861628   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:05.861638   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:05.879645   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:05.879655   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:05.891136   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:05.891147   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:05.904730   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:05.904741   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:08.418302   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:13.420517   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:13.420804   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:13.447798   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:13.447954   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:13.464775   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:13.464883   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:13.478000   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:13.478084   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:13.489227   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:13.489292   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:13.499448   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:13.499520   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:13.509916   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:13.509998   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:13.519889   13085 logs.go:282] 0 containers: []
	W1010 11:39:13.519904   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:13.519960   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:13.532418   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:13.532445   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:13.532451   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:13.570789   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:13.570796   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:13.609404   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:13.609414   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:13.627389   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:13.627400   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:13.645451   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:13.645462   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:13.659486   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:13.659497   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:13.671056   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:13.671071   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:13.687520   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:13.687532   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:13.702827   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:13.702839   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:13.728138   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:13.728147   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:13.739524   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:13.739534   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:13.744001   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:13.744007   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:13.757565   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:13.757576   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:13.768513   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:13.768523   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:13.783503   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:13.783514   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:13.795213   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:13.795224   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:16.308537   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:21.310945   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:21.311054   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:21.323203   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:21.323288   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:21.334697   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:21.334775   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:21.345911   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:21.345976   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:21.356689   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:21.356755   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:21.368948   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:21.369024   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:21.381020   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:21.381103   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:21.402578   13085 logs.go:282] 0 containers: []
	W1010 11:39:21.402592   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:21.402662   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:21.415566   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:21.415595   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:21.415601   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:21.456905   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:21.456922   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:21.469751   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:21.469765   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:21.482284   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:21.482296   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:21.496976   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:21.496989   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:21.510268   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:21.510281   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:21.532590   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:21.532604   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:21.545787   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:21.545804   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:21.564968   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:21.564978   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:21.581236   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:21.581249   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:21.603212   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:21.603226   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:21.616377   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:21.616392   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:21.621370   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:21.621383   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:21.662230   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:21.662244   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:21.677392   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:21.677405   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:21.704289   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:21.704303   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:24.219659   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:29.222202   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:29.222328   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:29.233803   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:29.233907   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:29.246720   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:29.246824   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:29.263337   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:29.263428   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:29.274054   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:29.274150   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:29.284833   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:29.284908   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:29.296300   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:29.296379   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:29.306982   13085 logs.go:282] 0 containers: []
	W1010 11:39:29.306994   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:29.307061   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:29.318997   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:29.319018   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:29.319023   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:29.323862   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:29.323869   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:29.337808   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:29.337820   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:29.373372   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:29.373384   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:29.385894   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:29.385906   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:29.401753   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:29.401765   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:29.413806   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:29.413823   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:29.431914   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:29.431924   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:29.443695   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:29.443707   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:29.455039   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:29.455049   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:29.468686   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:29.468695   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:29.482475   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:29.482484   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:29.493913   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:29.493925   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:29.533578   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:29.533587   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:29.544850   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:29.544859   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:29.568129   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:29.568138   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:32.094860   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:37.097181   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:37.097522   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:37.139183   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:37.139358   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:37.166793   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:37.166900   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:37.179911   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:37.180000   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:37.190774   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:37.190868   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:37.205422   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:37.205501   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:37.217356   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:37.217430   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:37.228881   13085 logs.go:282] 0 containers: []
	W1010 11:39:37.228894   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:37.228966   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:37.239245   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:37.239263   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:37.239268   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:37.243875   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:37.243880   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:37.255536   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:37.255548   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:37.266500   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:37.266511   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:37.279788   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:37.279799   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:37.294053   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:37.294064   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:37.314490   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:37.314500   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:37.331359   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:37.331370   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:37.343534   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:37.343544   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:37.384211   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:37.384221   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:37.423926   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:37.423938   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:37.435642   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:37.435656   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:37.457413   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:37.457424   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:37.471072   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:37.471083   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:37.482354   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:37.482366   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:37.493315   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:37.493326   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:40.019862   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:45.022095   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:45.022386   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:45.051897   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:45.052061   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:45.069890   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:45.069993   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:45.083694   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:45.083789   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:45.095995   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:45.096072   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:45.106932   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:45.107004   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:45.117576   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:45.117679   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:45.127688   13085 logs.go:282] 0 containers: []
	W1010 11:39:45.127701   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:45.127765   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:45.143503   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:45.143520   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:45.143526   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:45.180166   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:45.180177   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:45.197235   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:45.197249   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:45.210985   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:45.210999   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:45.228689   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:45.228700   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:45.240272   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:45.240283   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:45.266475   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:45.266489   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:45.282112   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:45.282124   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:45.296410   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:45.296422   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:45.309838   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:45.309850   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:45.323694   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:45.323709   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:45.337922   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:45.337933   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:45.352986   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:45.353003   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:45.366354   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:45.366366   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:45.404949   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:45.404963   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:45.410125   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:45.410135   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:47.924347   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:52.926590   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:52.926896   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:52.953438   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:52.953577   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:52.971212   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:52.971306   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:52.988884   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:52.988965   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:52.999516   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:52.999611   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:53.009766   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:53.009859   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:53.020611   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:53.020702   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:53.031073   13085 logs.go:282] 0 containers: []
	W1010 11:39:53.031088   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:53.031161   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:53.045357   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:53.045378   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:53.045383   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:53.057993   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:53.058005   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:53.076616   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:53.076627   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:53.088936   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:53.088948   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:53.100601   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:53.100612   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:53.113949   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:53.113958   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:53.125046   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:53.125058   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:53.142071   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:53.142082   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:53.175848   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:53.175858   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:53.190300   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:53.190311   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:53.205300   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:53.205311   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:53.219296   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:53.219309   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:53.231722   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:53.231734   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:53.274544   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:53.274555   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:53.278633   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:53.278639   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:53.289475   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:53.289486   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:55.815825   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:00.817982   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:00.818117   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:40:00.832080   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:40:00.832168   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:40:00.843325   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:40:00.843401   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:40:00.859393   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:40:00.859459   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:40:00.870556   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:40:00.870642   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:40:00.881032   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:40:00.881115   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:40:00.892425   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:40:00.892514   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:40:00.903104   13085 logs.go:282] 0 containers: []
	W1010 11:40:00.903119   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:40:00.903202   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:40:00.915573   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:40:00.915592   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:40:00.915598   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:40:00.927847   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:40:00.927857   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:40:00.939284   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:40:00.939294   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:40:00.950346   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:40:00.950360   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:40:00.990734   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:40:00.990744   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:40:01.027494   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:40:01.027505   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:40:01.042723   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:40:01.042738   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:40:01.054421   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:40:01.054431   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:40:01.066237   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:40:01.066251   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:40:01.084905   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:40:01.084915   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:40:01.110196   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:40:01.110208   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:40:01.134160   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:40:01.134171   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:40:01.147772   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:40:01.147783   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:40:01.152242   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:40:01.152249   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:40:01.166384   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:40:01.166396   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:40:01.179492   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:40:01.179503   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:40:03.705639   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:08.707949   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:08.708153   13085 kubeadm.go:597] duration metric: took 4m4.668867291s to restartPrimaryControlPlane
	W1010 11:40:08.708317   13085 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
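The repeated healthz probes above (api_server.go:253/269) can be reproduced by hand against the guest. A minimal bash sketch, assuming the apiserver address 10.0.2.15:8443 from the log and that curl is available inside the VM; this is a manual approximation, not minikube's internal code:

    #!/bin/bash
    # Probe the apiserver health endpoint the way the log does: each attempt
    # gives up after 5 seconds, matching the 5s gap between "Checking" and
    # "stopped" in the records above.
    for i in $(seq 1 10); do
      if curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; then
        echo "apiserver healthy"
        exit 0
      fi
      echo "attempt $i: apiserver not responding, retrying..."
      sleep 3
    done
    exit 1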
	I1010 11:40:08.708385   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1010 11:40:09.695146   13085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 11:40:09.700023   13085 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 11:40:09.702819   13085 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 11:40:09.705443   13085 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 11:40:09.705449   13085 kubeadm.go:157] found existing configuration files:
	
	I1010 11:40:09.705481   13085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/admin.conf
	I1010 11:40:09.708350   13085 kubeadm.go:163] "https://control-plane.minikube.internal:53349" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 11:40:09.708380   13085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 11:40:09.710860   13085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/kubelet.conf
	I1010 11:40:09.713342   13085 kubeadm.go:163] "https://control-plane.minikube.internal:53349" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 11:40:09.713390   13085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 11:40:09.716551   13085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/controller-manager.conf
	I1010 11:40:09.719146   13085 kubeadm.go:163] "https://control-plane.minikube.internal:53349" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 11:40:09.719189   13085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 11:40:09.721605   13085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/scheduler.conf
	I1010 11:40:09.724484   13085 kubeadm.go:163] "https://control-plane.minikube.internal:53349" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 11:40:09.724519   13085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
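The stale-config cleanup above follows one pattern per kubeconfig: grep for the expected control-plane endpoint and remove the file when the endpoint is absent (or, as here, when the file does not exist at all). A hedged sketch of the equivalent shell loop, with the endpoint and paths taken from the log records:

    #!/bin/bash
    # Remove kubeconfigs that do not reference the expected control-plane endpoint.
    endpoint="https://control-plane.minikube.internal:53349"
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      # grep exits non-zero if the endpoint is missing or the file is absent;
      # in either case the (possibly stale) file is removed.
      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done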
	I1010 11:40:09.727255   13085 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 11:40:09.743767   13085 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1010 11:40:09.743797   13085 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 11:40:09.791762   13085 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 11:40:09.791828   13085 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 11:40:09.791888   13085 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 11:40:09.841059   13085 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 11:40:09.846321   13085 out.go:235]   - Generating certificates and keys ...
	I1010 11:40:09.846355   13085 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 11:40:09.846389   13085 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 11:40:09.846427   13085 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 11:40:09.846455   13085 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 11:40:09.846489   13085 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 11:40:09.846515   13085 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 11:40:09.846544   13085 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 11:40:09.846573   13085 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 11:40:09.846608   13085 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 11:40:09.846654   13085 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 11:40:09.846671   13085 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 11:40:09.846697   13085 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 11:40:09.963400   13085 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 11:40:10.198887   13085 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 11:40:10.297357   13085 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 11:40:10.440510   13085 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 11:40:10.467812   13085 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 11:40:10.468197   13085 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 11:40:10.468272   13085 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 11:40:10.546868   13085 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 11:40:10.551580   13085 out.go:235]   - Booting up control plane ...
	I1010 11:40:10.551629   13085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 11:40:10.551674   13085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 11:40:10.551728   13085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 11:40:10.551813   13085 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 11:40:10.551929   13085 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 11:40:15.055919   13085 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503902 seconds
	I1010 11:40:15.056013   13085 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 11:40:15.061582   13085 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 11:40:15.569301   13085 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 11:40:15.569410   13085 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-704000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 11:40:16.073143   13085 kubeadm.go:310] [bootstrap-token] Using token: 8iuhps.egjej8sdpgu4s4u9
	I1010 11:40:16.076757   13085 out.go:235]   - Configuring RBAC rules ...
	I1010 11:40:16.076810   13085 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 11:40:16.076857   13085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 11:40:16.080344   13085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 11:40:16.081153   13085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 11:40:16.082073   13085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 11:40:16.082957   13085 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 11:40:16.086412   13085 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 11:40:16.267960   13085 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 11:40:16.477472   13085 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 11:40:16.477926   13085 kubeadm.go:310] 
	I1010 11:40:16.477958   13085 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 11:40:16.477963   13085 kubeadm.go:310] 
	I1010 11:40:16.478001   13085 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 11:40:16.478051   13085 kubeadm.go:310] 
	I1010 11:40:16.478126   13085 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 11:40:16.478240   13085 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 11:40:16.478271   13085 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 11:40:16.478274   13085 kubeadm.go:310] 
	I1010 11:40:16.478351   13085 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 11:40:16.478355   13085 kubeadm.go:310] 
	I1010 11:40:16.478377   13085 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 11:40:16.478379   13085 kubeadm.go:310] 
	I1010 11:40:16.478404   13085 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 11:40:16.478536   13085 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 11:40:16.478641   13085 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 11:40:16.478649   13085 kubeadm.go:310] 
	I1010 11:40:16.478745   13085 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 11:40:16.478781   13085 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 11:40:16.478783   13085 kubeadm.go:310] 
	I1010 11:40:16.478891   13085 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8iuhps.egjej8sdpgu4s4u9 \
	I1010 11:40:16.478940   13085 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79bae548dd16cfe936348b462da3f6d7ee9037c9333f736d9e5d628396c7b6e1 \
	I1010 11:40:16.478953   13085 kubeadm.go:310] 	--control-plane 
	I1010 11:40:16.478956   13085 kubeadm.go:310] 
	I1010 11:40:16.479038   13085 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 11:40:16.479043   13085 kubeadm.go:310] 
	I1010 11:40:16.479109   13085 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8iuhps.egjej8sdpgu4s4u9 \
	I1010 11:40:16.479167   13085 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79bae548dd16cfe936348b462da3f6d7ee9037c9333f736d9e5d628396c7b6e1 
	I1010 11:40:16.479221   13085 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 11:40:16.479230   13085 cni.go:84] Creating CNI manager for ""
	I1010 11:40:16.479238   13085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:40:16.483134   13085 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 11:40:16.490197   13085 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 11:40:16.493849   13085 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
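The 496-byte conflist written above is not shown in the log. A representative bridge CNI configuration of the general shape minikube installs, offered as an illustration only (the plugin list, subnet, and version are assumptions, not taken from the log):

    #!/bin/bash
    # Illustrative only: a minimal bridge CNI conflist. The actual file
    # content scp'd by minikube is not captured in this log.
    sudo mkdir -p /etc/cni/net.d
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF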
	I1010 11:40:16.503893   13085 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 11:40:16.504012   13085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-704000 minikube.k8s.io/updated_at=2024_10_10T11_40_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=running-upgrade-704000 minikube.k8s.io/primary=true
	I1010 11:40:16.504050   13085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 11:40:16.507841   13085 ops.go:34] apiserver oom_adj: -16
	I1010 11:40:16.557574   13085 kubeadm.go:1113] duration metric: took 53.658042ms to wait for elevateKubeSystemPrivileges
	I1010 11:40:16.557899   13085 kubeadm.go:394] duration metric: took 4m12.556757333s to StartCluster
	I1010 11:40:16.557913   13085 settings.go:142] acquiring lock: {Name:mkc38780b398d6ae1b1dc4b65b91e70a285222f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:40:16.558092   13085 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:40:16.558519   13085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/kubeconfig: {Name:mk76f18909e94718c05e51991d3ea4660849ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:40:16.558705   13085 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:40:16.558750   13085 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 11:40:16.558788   13085 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-704000"
	I1010 11:40:16.558797   13085 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-704000"
	W1010 11:40:16.558800   13085 addons.go:243] addon storage-provisioner should already be in state true
	I1010 11:40:16.558831   13085 host.go:66] Checking if "running-upgrade-704000" exists ...
	I1010 11:40:16.558811   13085 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-704000"
	I1010 11:40:16.558854   13085 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-704000"
	I1010 11:40:16.558914   13085 config.go:182] Loaded profile config "running-upgrade-704000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:40:16.560089   13085 kapi.go:59] client config for running-upgrade-704000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/client.key", CAFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10202aa80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1010 11:40:16.560572   13085 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-704000"
	W1010 11:40:16.560578   13085 addons.go:243] addon default-storageclass should already be in state true
	I1010 11:40:16.560588   13085 host.go:66] Checking if "running-upgrade-704000" exists ...
	I1010 11:40:16.563135   13085 out.go:177] * Verifying Kubernetes components...
	I1010 11:40:16.563515   13085 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 11:40:16.569358   13085 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 11:40:16.569375   13085 sshutil.go:53] new ssh client: &{IP:localhost Port:53317 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/running-upgrade-704000/id_rsa Username:docker}
	I1010 11:40:16.573073   13085 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:40:16.577163   13085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:40:16.583134   13085 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 11:40:16.583146   13085 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 11:40:16.583157   13085 sshutil.go:53] new ssh client: &{IP:localhost Port:53317 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/running-upgrade-704000/id_rsa Username:docker}
	I1010 11:40:16.664903   13085 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 11:40:16.670673   13085 api_server.go:52] waiting for apiserver process to appear ...
	I1010 11:40:16.670726   13085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:40:16.674804   13085 api_server.go:72] duration metric: took 116.087667ms to wait for apiserver process to appear ...
	I1010 11:40:16.674813   13085 api_server.go:88] waiting for apiserver healthz status ...
	I1010 11:40:16.674820   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:16.705430   13085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 11:40:16.718616   13085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 11:40:17.040623   13085 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1010 11:40:17.040635   13085 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1010 11:40:21.676750   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:21.676799   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:26.677256   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:26.677291   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:31.677590   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:31.677632   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:36.678102   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:36.678131   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:41.678725   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:41.678776   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:46.679820   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:46.679847   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1010 11:40:47.042715   13085 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1010 11:40:47.046962   13085 out.go:177] * Enabled addons: storage-provisioner
	I1010 11:40:47.054883   13085 addons.go:510] duration metric: took 30.496448084s for enable addons: enabled=[storage-provisioner]
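[Editor's note: the probe pattern in the surrounding lines is a plain poll loop: GET https://10.0.2.15:8443/healthz with a roughly 5-second client timeout, logging "stopped: ... Client.Timeout exceeded" on each failure and retrying. The sketch below is a minimal, hypothetical reconstruction of that loop, not minikube's actual api_server.go code; the URL and timeout are taken from the log, InsecureSkipVerify stands in for the real cert handling.]

	// healthz_probe.go — minimal sketch of the poll loop visible above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// 5s matches the gap between successive probes in the log
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// assumption: skip verification of the VM's self-signed apiserver cert
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				// e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
				fmt.Printf("stopped: %v\n", err)
				time.Sleep(2 * time.Second)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz: %s\n", body)
			return
		}
	}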
	I1010 11:40:51.680901   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:51.680951   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:56.682478   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:56.682527   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:01.684340   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:01.684391   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:06.686650   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:06.686704   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:11.686962   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:11.687013   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:16.689280   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:16.689476   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:16.707000   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:41:16.707082   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:16.721898   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:41:16.721984   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:16.739101   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:41:16.739183   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:16.751089   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:41:16.751172   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:16.762998   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:41:16.763083   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:16.773725   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:41:16.773800   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:16.784048   13085 logs.go:282] 0 containers: []
	W1010 11:41:16.784059   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:16.784146   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:16.794804   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:41:16.794820   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:41:16.794826   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:41:16.811178   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:16.811189   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:16.834787   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:41:16.834799   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:16.846786   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:16.846823   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:16.881530   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:16.881543   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:16.886507   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:16.886518   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:16.926554   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:41:16.926571   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:41:16.940468   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:41:16.940478   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:41:16.952549   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:41:16.952560   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:41:16.967233   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:41:16.967243   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:41:16.979409   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:41:16.979421   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:41:16.990940   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:41:16.990955   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:41:17.008791   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:41:17.008802   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
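[Editor's note: each gathering cycle above resolves a component's container ID with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" and then tails it with "docker logs --tail 400 <id>". The following is a hypothetical sketch of that pattern, assuming a docker CLI on PATH; it is not minikube's logs.go implementation.]

	// gather_logs.go — minimal sketch of the per-component log gathering above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists container IDs whose names match the k8s_<name> filter.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+name, "--format={{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(comp)
			if err != nil {
				fmt.Println("docker ps failed:", err)
				continue
			}
			for _, id := range ids {
				// tail the last 400 lines, as the log cycles above do
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s] ==\n%s\n", comp, id, logs)
			}
		}
	}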
	I1010 11:41:19.522748   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:24.525244   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:24.525376   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:24.537970   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:41:24.538050   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:24.551769   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:41:24.551847   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:24.562731   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:41:24.562804   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:24.573258   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:41:24.573339   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:24.585853   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:41:24.585931   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:24.597906   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:41:24.597992   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:24.610097   13085 logs.go:282] 0 containers: []
	W1010 11:41:24.610107   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:24.610177   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:24.621268   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:41:24.621283   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:41:24.621289   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:41:24.636027   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:41:24.636043   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:41:24.649438   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:41:24.649449   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:41:24.662287   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:41:24.662298   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:41:24.680907   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:41:24.680918   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:41:24.697065   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:41:24.697076   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:41:24.708691   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:41:24.708701   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:41:24.719867   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:24.719877   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:24.743122   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:24.743128   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:24.777129   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:24.777144   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:24.783419   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:24.783427   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:24.820662   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:41:24.820672   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:41:24.835017   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:41:24.835027   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:27.348033   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:32.349722   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:32.349840   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:32.363231   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:41:32.363316   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:32.374795   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:41:32.374874   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:32.386214   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:41:32.386296   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:32.397414   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:41:32.397488   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:32.408724   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:41:32.408808   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:32.420548   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:41:32.420632   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:32.433950   13085 logs.go:282] 0 containers: []
	W1010 11:41:32.433964   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:32.434034   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:32.444779   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:41:32.444796   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:41:32.444802   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:32.457208   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:32.457219   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:32.462521   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:41:32.462530   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:41:32.477512   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:41:32.477523   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:41:32.492219   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:41:32.492230   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:41:32.505024   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:41:32.505037   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:41:32.519885   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:41:32.519897   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:41:32.533360   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:41:32.533371   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:41:32.553326   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:32.553339   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:32.590710   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:32.590719   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:32.629131   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:41:32.629141   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:41:32.647178   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:41:32.647189   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:41:32.667439   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:32.667449   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:35.193751   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:40.195723   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:40.195800   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:40.208018   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:41:40.208096   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:40.219159   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:41:40.219237   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:40.230212   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:41:40.230290   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:40.241896   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:41:40.241975   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:40.253991   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:41:40.254073   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:40.265883   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:41:40.265964   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:40.276818   13085 logs.go:282] 0 containers: []
	W1010 11:41:40.276829   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:40.276896   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:40.288453   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:41:40.288471   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:41:40.288477   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:41:40.305556   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:41:40.305567   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:41:40.319908   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:41:40.319919   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:41:40.331754   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:41:40.331767   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:41:40.347668   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:41:40.347678   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:41:40.366308   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:40.366320   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:40.391418   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:41:40.391431   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:40.403477   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:40.403488   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:40.439244   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:40.439258   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:40.477519   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:41:40.477530   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:41:40.492792   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:41:40.492804   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:41:40.505599   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:41:40.505612   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:41:40.530214   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:40.530225   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:43.037083   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:48.039356   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:48.039696   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:48.062277   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:41:48.062381   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:48.078729   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:41:48.078819   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:48.092574   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:41:48.092659   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:48.104237   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:41:48.104314   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:48.115623   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:41:48.115703   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:48.127280   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:41:48.127357   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:48.139534   13085 logs.go:282] 0 containers: []
	W1010 11:41:48.139542   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:48.139579   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:48.150912   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:41:48.150923   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:41:48.150928   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:41:48.167667   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:41:48.167682   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:41:48.180601   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:41:48.180613   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:41:48.196334   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:41:48.196342   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:48.209227   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:48.209239   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:48.248177   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:48.248191   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:48.253339   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:41:48.253347   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:41:48.268564   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:41:48.268573   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:41:48.291863   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:41:48.291874   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:41:48.304579   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:41:48.304591   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:41:48.323553   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:41:48.323566   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:41:48.336691   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:48.336703   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:48.363068   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:48.363081   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:50.902699   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:55.905117   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:55.905605   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:55.936465   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:41:55.936617   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:55.955473   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:41:55.955581   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:55.970030   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:41:55.970116   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:55.981858   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:41:55.981938   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:55.993719   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:41:55.993797   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:56.004290   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:41:56.004357   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:56.015535   13085 logs.go:282] 0 containers: []
	W1010 11:41:56.015569   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:56.015638   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:56.027574   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:41:56.027590   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:41:56.027596   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:41:56.040661   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:41:56.040673   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:41:56.053518   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:41:56.053532   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:41:56.066498   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:41:56.066508   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:56.078859   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:56.078869   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:56.083791   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:41:56.083799   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:41:56.099416   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:41:56.099432   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:41:56.114742   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:41:56.114755   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:41:56.132782   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:41:56.132793   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:41:56.151236   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:56.151248   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:56.177829   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:56.177847   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:56.214041   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:56.214054   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:56.251964   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:41:56.251975   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:41:58.769967   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:03.772207   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:03.772465   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:03.793930   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:03.794041   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:03.808425   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:03.808516   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:03.820567   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:42:03.820640   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:03.830984   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:03.831061   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:03.841570   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:03.841659   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:03.852463   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:03.852546   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:03.863565   13085 logs.go:282] 0 containers: []
	W1010 11:42:03.863578   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:03.863641   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:03.874134   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:03.874148   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:03.874155   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:03.885221   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:03.885232   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:03.919173   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:03.919183   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:03.923579   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:03.923586   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:03.938262   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:03.938278   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:03.951004   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:03.951016   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:03.963332   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:03.963346   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:03.976165   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:03.976177   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:04.001659   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:04.001674   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:04.039698   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:04.039710   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:04.056432   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:04.056446   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:04.079298   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:04.079306   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:04.098272   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:04.098284   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:06.623664   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:11.625877   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:11.626129   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:11.645798   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:11.645899   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:11.659946   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:11.660031   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:11.672104   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:42:11.672186   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:11.683048   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:11.683122   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:11.693972   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:11.694041   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:11.704950   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:11.705028   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:11.715304   13085 logs.go:282] 0 containers: []
	W1010 11:42:11.715317   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:11.715383   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:11.731875   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:11.731892   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:11.731898   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:11.736830   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:11.736836   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:11.774585   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:11.774596   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:11.788189   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:11.788202   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:11.809569   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:11.809580   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:11.821264   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:11.821277   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:11.845151   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:11.845161   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:11.880609   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:11.880630   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:11.896397   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:11.896406   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:11.909172   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:11.909190   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:11.921490   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:11.921502   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:11.933774   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:11.933785   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:11.952165   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:11.952175   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:14.466878   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:19.469263   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:19.469590   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:19.496055   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:19.496198   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:19.514393   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:19.514498   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:19.529532   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:42:19.529615   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:19.541748   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:19.541830   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:19.552478   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:19.552557   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:19.563137   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:19.563206   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:19.573481   13085 logs.go:282] 0 containers: []
	W1010 11:42:19.573490   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:19.573552   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:19.583894   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:19.583910   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:19.583917   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:19.595494   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:19.595504   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:19.612993   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:19.613004   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:19.629132   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:19.629141   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:19.652700   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:19.652710   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:19.688972   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:19.688986   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:19.702028   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:19.702041   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:19.716412   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:19.716423   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:19.730048   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:19.730057   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:19.741583   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:19.741597   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:19.753974   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:19.753986   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:19.789940   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:19.789954   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:19.795298   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:19.795311   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:22.313061   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:27.314395   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:27.314516   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:27.325869   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:27.325952   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:27.335791   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:27.335868   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:27.346071   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:42:27.346137   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:27.356581   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:27.356649   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:27.367589   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:27.367667   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:27.378521   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:27.378593   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:27.389215   13085 logs.go:282] 0 containers: []
	W1010 11:42:27.389229   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:27.389301   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:27.399518   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:27.399537   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:27.399543   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:27.410970   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:27.410982   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:27.429054   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:27.429065   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:27.440364   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:27.440375   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:27.463539   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:27.463548   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:27.498690   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:27.498697   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:27.512742   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:27.512756   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:27.528689   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:27.528700   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:27.546223   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:27.546234   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:27.558419   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:27.558432   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:27.563704   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:27.563711   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:27.630943   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:27.630955   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:27.647352   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:27.647367   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:30.179650   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:35.181193   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:35.181679   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:35.221573   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:35.221757   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:35.244588   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:35.244692   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:35.260354   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:42:35.260450   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:35.277489   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:35.277575   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:35.292485   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:35.292579   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:35.304853   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:35.304950   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:35.315224   13085 logs.go:282] 0 containers: []
	W1010 11:42:35.315235   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:35.315291   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:35.326202   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:35.326222   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:35.326228   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:35.345830   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:42:35.345839   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:42:35.357649   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:35.357661   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:35.373440   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:35.373451   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:35.395815   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:35.395825   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:35.400287   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:35.400293   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:35.411717   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:35.411726   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:35.423306   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:35.423316   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:35.434987   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:35.434997   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:35.470615   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:35.470625   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:35.484934   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:42:35.484944   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:42:35.498961   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:35.498971   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:35.510923   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:35.510932   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:35.524437   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:35.524449   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:35.550959   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:35.550968   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:38.090025   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:43.092433   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:43.092702   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:43.114924   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:43.115028   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:43.130618   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:43.130713   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:43.143313   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:42:43.143391   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:43.154640   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:43.154713   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:43.169864   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:43.169939   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:43.180932   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:43.181015   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:43.191209   13085 logs.go:282] 0 containers: []
	W1010 11:42:43.191219   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:43.191283   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:43.202276   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:43.202293   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:43.202299   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:43.220141   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:42:43.220151   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:42:43.232132   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:43.232142   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:43.246139   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:43.246150   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:43.261334   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:43.261347   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:43.280716   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:43.280726   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:43.314790   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:43.314804   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:43.368738   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:42:43.368750   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:42:43.380602   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:43.380614   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:43.392871   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:43.392881   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:43.397646   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:43.397652   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:43.409403   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:43.409413   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:43.431493   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:43.431506   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:43.444023   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:43.444036   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:43.469667   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:43.469678   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:45.985455   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:50.988184   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:50.988727   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:51.027864   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:51.028027   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:51.050437   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:51.050556   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:51.065264   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:42:51.065356   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:51.080719   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:51.080793   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:51.091684   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:51.091763   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:51.102307   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:51.102390   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:51.112944   13085 logs.go:282] 0 containers: []
	W1010 11:42:51.112957   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:51.113027   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:51.124383   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:51.124401   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:51.124406   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:51.129039   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:42:51.129045   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:42:51.140557   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:51.140568   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:51.156228   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:51.156241   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:51.167216   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:51.167227   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:51.192303   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:51.192311   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:51.227770   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:42:51.227779   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:42:51.240776   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:51.240787   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:51.252856   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:51.252870   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:51.268974   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:51.268985   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:51.281607   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:51.281617   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:51.293410   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:51.293419   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:51.329676   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:51.329686   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:51.344136   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:51.344147   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:51.358218   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:51.358229   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:53.880732   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:58.883090   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:58.883325   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:58.902353   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:58.902470   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:58.917211   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:58.917300   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:58.929517   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:42:58.929593   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:58.940163   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:58.940235   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:58.950390   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:58.950470   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:58.960970   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:58.961045   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:58.971424   13085 logs.go:282] 0 containers: []
	W1010 11:42:58.971441   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:58.971512   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:58.982436   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:58.982454   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:58.982461   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:59.019049   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:59.019062   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:59.058369   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:42:59.058381   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:42:59.071108   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:59.071123   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:59.083259   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:59.083275   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:59.106785   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:59.106793   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:59.121508   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:59.121521   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:59.135613   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:42:59.135626   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:42:59.147491   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:59.147501   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:59.164503   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:59.164513   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:59.175414   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:59.175427   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:59.186958   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:59.186975   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:59.191365   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:59.191372   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:59.205319   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:59.205329   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:59.216829   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:59.216839   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:01.736871   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:06.739132   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:06.739304   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:06.752456   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:06.752538   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:06.763435   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:06.763515   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:06.774213   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:06.774293   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:06.784523   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:06.784603   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:06.795149   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:06.795223   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:06.805192   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:06.805269   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:06.815955   13085 logs.go:282] 0 containers: []
	W1010 11:43:06.815965   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:06.816024   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:06.826809   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:06.826830   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:06.826836   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:06.862280   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:06.862288   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:06.877398   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:06.877408   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:06.891752   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:06.891762   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:06.903301   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:06.903312   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:06.922078   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:06.922088   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:06.933922   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:06.933933   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:06.948391   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:06.948400   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:06.960567   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:06.960577   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:06.965501   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:06.965509   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:07.001329   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:07.001341   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:07.013750   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:07.013764   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:07.025562   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:07.025572   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:07.038958   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:07.038969   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:07.056577   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:07.056591   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:09.584611   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:14.586905   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:14.587096   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:14.598962   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:14.599049   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:14.614011   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:14.614091   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:14.624571   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:14.624647   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:14.634890   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:14.634969   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:14.652015   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:14.652094   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:14.666717   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:14.666792   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:14.677266   13085 logs.go:282] 0 containers: []
	W1010 11:43:14.677277   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:14.677341   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:14.687846   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:14.687865   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:14.687886   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:14.699960   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:14.699971   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:14.711078   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:14.711089   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:14.736108   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:14.736118   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:14.750418   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:14.750430   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:14.763619   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:14.763634   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:14.775210   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:14.775224   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:14.790174   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:14.790184   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:14.801843   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:14.801854   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:14.825116   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:14.825127   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:14.837169   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:14.837179   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:14.871858   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:14.871872   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:14.876656   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:14.876664   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:14.900682   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:14.900693   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:14.912888   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:14.912902   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:17.448643   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:22.451208   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:22.451520   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:22.479308   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:22.479451   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:22.497004   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:22.497098   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:22.511117   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:22.511205   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:22.524967   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:22.525036   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:22.535608   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:22.535673   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:22.546152   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:22.546236   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:22.556371   13085 logs.go:282] 0 containers: []
	W1010 11:43:22.556384   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:22.556449   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:22.582613   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:22.582630   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:22.582637   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:22.587001   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:22.587008   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:22.601194   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:22.601205   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:22.615789   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:22.615800   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:22.627300   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:22.627311   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:22.638413   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:22.638423   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:22.672170   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:22.672177   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:22.684099   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:22.684112   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:22.709149   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:22.709157   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:22.720871   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:22.720885   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:22.733841   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:22.733855   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:22.749378   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:22.749392   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:22.761078   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:22.761092   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:22.773699   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:22.773710   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:22.791077   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:22.791091   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:25.327762   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:30.330110   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:30.330290   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:30.341264   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:30.341353   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:30.352164   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:30.352243   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:30.366656   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:30.366739   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:30.377151   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:30.377225   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:30.387851   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:30.387929   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:30.406382   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:30.406463   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:30.416627   13085 logs.go:282] 0 containers: []
	W1010 11:43:30.416642   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:30.416707   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:30.427209   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:30.427228   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:30.427235   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:30.438224   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:30.438233   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:30.450305   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:30.450317   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:30.475179   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:30.475188   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:30.487148   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:30.487159   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:30.509132   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:30.509143   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:30.514573   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:30.514582   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:30.550238   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:30.550249   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:30.565911   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:30.565921   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:30.577770   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:30.577781   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:30.589405   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:30.589417   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:30.624317   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:30.624338   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:30.639266   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:30.639277   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:30.650860   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:30.650872   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:30.672298   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:30.672314   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:33.185843   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:38.188121   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:38.188302   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:38.199997   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:38.200085   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:38.210370   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:38.210453   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:38.225733   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:38.225816   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:38.236631   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:38.236712   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:38.247187   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:38.247275   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:38.258963   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:38.259049   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:38.271399   13085 logs.go:282] 0 containers: []
	W1010 11:43:38.271410   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:38.271488   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:38.282547   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:38.282567   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:38.282574   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:38.321231   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:38.321246   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:38.337829   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:38.337843   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:38.374610   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:38.374624   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:38.386864   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:38.386878   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:38.398710   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:38.398721   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:38.423893   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:38.423900   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:38.428881   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:38.428887   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:38.440216   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:38.440232   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:38.454059   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:38.454074   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:38.469919   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:38.469938   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:38.482599   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:38.482611   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:38.498083   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:38.498094   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:38.518571   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:38.518586   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:38.533550   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:38.533562   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:41.051612   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:46.053935   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:46.054145   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:46.066102   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:46.066193   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:46.076401   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:46.076476   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:46.087035   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:46.087127   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:46.097422   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:46.097488   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:46.112845   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:46.112922   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:46.124059   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:46.124138   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:46.134302   13085 logs.go:282] 0 containers: []
	W1010 11:43:46.134319   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:46.134388   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:46.144957   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:46.144975   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:46.144981   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:46.156642   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:46.156653   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:46.179333   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:46.179341   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:46.190917   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:46.190928   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:46.195991   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:46.196000   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:46.208056   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:46.208067   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:46.223332   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:46.223343   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:46.240858   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:46.240869   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:46.275731   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:46.275741   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:46.290469   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:46.290479   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:46.302112   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:46.302122   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:46.339035   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:46.339047   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:46.353517   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:46.353528   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:46.366564   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:46.366575   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:46.378512   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:46.378522   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:48.890206   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:53.892319   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:53.892401   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:53.903958   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:53.904041   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:53.915613   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:53.915692   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:53.926624   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:53.926708   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:53.937321   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:53.937404   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:53.947779   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:53.947854   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:53.958770   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:53.958844   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:53.968778   13085 logs.go:282] 0 containers: []
	W1010 11:43:53.968790   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:53.968853   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:53.979588   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:53.979607   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:53.979613   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:53.997494   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:53.997505   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:54.009440   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:54.009451   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:54.013962   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:54.013970   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:54.048721   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:54.048736   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:54.062520   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:54.062530   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:54.076972   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:54.076983   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:54.089124   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:54.089138   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:54.112671   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:54.112681   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:54.147209   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:54.147217   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:54.158502   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:54.158512   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:54.170759   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:54.170770   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:54.187865   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:54.187876   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:54.200169   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:54.200182   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:54.214475   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:54.214486   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:56.728189   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:01.729510   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:01.729730   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:44:01.761138   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:44:01.761247   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:44:01.775623   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:44:01.775707   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:44:01.788599   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:44:01.788682   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:44:01.800796   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:44:01.800870   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:44:01.811563   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:44:01.811628   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:44:01.823256   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:44:01.823336   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:44:01.834523   13085 logs.go:282] 0 containers: []
	W1010 11:44:01.834533   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:44:01.834594   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:44:01.845738   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:44:01.845756   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:44:01.845761   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:44:01.858345   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:44:01.858359   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:44:01.870585   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:44:01.870598   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:44:01.882662   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:44:01.882673   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:44:01.894846   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:44:01.894860   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:44:01.918141   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:44:01.918149   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:44:01.930322   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:44:01.930333   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:44:01.934911   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:44:01.934920   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:44:01.949694   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:44:01.949704   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:44:01.961547   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:44:01.961558   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:44:01.979653   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:44:01.979667   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:44:02.014888   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:44:02.014896   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:44:02.034466   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:44:02.034476   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:44:02.069423   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:44:02.069434   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:44:02.081018   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:44:02.081027   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:44:04.598297   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:09.600557   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:09.600782   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:44:09.622437   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:44:09.622542   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:44:09.637941   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:44:09.638028   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:44:09.650296   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:44:09.650374   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:44:09.664638   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:44:09.664718   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:44:09.675578   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:44:09.675656   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:44:09.686076   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:44:09.686153   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:44:09.697071   13085 logs.go:282] 0 containers: []
	W1010 11:44:09.697081   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:44:09.697143   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:44:09.707771   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:44:09.707790   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:44:09.707796   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:44:09.719482   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:44:09.719495   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:44:09.731494   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:44:09.731505   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:44:09.743317   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:44:09.743328   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:44:09.777310   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:44:09.777322   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:44:09.782393   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:44:09.782407   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:44:09.798204   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:44:09.798219   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:44:09.812411   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:44:09.812421   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:44:09.824012   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:44:09.824026   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:44:09.839311   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:44:09.839325   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:44:09.862374   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:44:09.862381   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:44:09.873568   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:44:09.873581   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:44:09.909580   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:44:09.909592   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:44:09.921511   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:44:09.921526   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:44:09.933770   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:44:09.933781   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:44:12.453343   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:17.455548   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:17.457446   13085 out.go:201] 
	W1010 11:44:17.462002   13085 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1010 11:44:17.462011   13085 out.go:270] * 
	W1010 11:44:17.462754   13085 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:44:17.473934   13085 out.go:201] 

** /stderr **
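
The stderr above ends in this report's dominant failure mode: the apiserver's /healthz at https://10.0.2.15:8443 is probed with a short per-request timeout, component logs are gathered after each miss, and the start aborts with GUEST_START once the 6m0s node-wait deadline expires. A self-contained sketch of that deadline-bounded polling (the URL, probe interval, and timeouts are taken from the log, but the code itself is illustrative, not minikube's):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it answers 200 OK or the overall
    // deadline passes. Each probe carries its own 5s client timeout,
    // matching the gap between "Checking" and "stopped:" lines above.
    func waitHealthy(url string, probeTimeout, overall time.Duration) error {
        client := &http.Client{
            Timeout: probeTimeout,
            // The apiserver certificate is self-signed during bring-up.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(2500 * time.Millisecond) // pause before the next probe
        }
        return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
    }

    func main() {
        err := waitHealthy("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute)
        if err != nil {
            fmt.Println("X Exiting due to GUEST_START:", err)
        }
    }
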
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-704000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-10 11:44:17.573001 -0700 PDT m=+1316.379990043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-704000 -n running-upgrade-704000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-704000 -n running-upgrade-704000: exit status 2 (15.780296917s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-704000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-473000          | force-systemd-flag-473000 | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-849000              | force-systemd-env-849000  | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-849000           | force-systemd-env-849000  | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT | 10 Oct 24 11:34 PDT |
	| start   | -p docker-flags-736000                | docker-flags-736000       | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-473000             | force-systemd-flag-473000 | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-473000          | force-systemd-flag-473000 | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT | 10 Oct 24 11:34 PDT |
	| start   | -p cert-expiration-986000             | cert-expiration-986000    | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-736000 ssh               | docker-flags-736000       | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-736000 ssh               | docker-flags-736000       | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-736000                | docker-flags-736000       | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT | 10 Oct 24 11:34 PDT |
	| start   | -p cert-options-371000                | cert-options-371000       | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-371000 ssh               | cert-options-371000       | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-371000 -- sudo        | cert-options-371000       | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-371000                | cert-options-371000       | jenkins | v1.34.0 | 10 Oct 24 11:34 PDT | 10 Oct 24 11:34 PDT |
	| start   | -p running-upgrade-704000             | minikube                  | jenkins | v1.26.0 | 10 Oct 24 11:34 PDT | 10 Oct 24 11:35 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-704000             | running-upgrade-704000    | jenkins | v1.34.0 | 10 Oct 24 11:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-986000             | cert-expiration-986000    | jenkins | v1.34.0 | 10 Oct 24 11:37 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-986000             | cert-expiration-986000    | jenkins | v1.34.0 | 10 Oct 24 11:37 PDT | 10 Oct 24 11:37 PDT |
	| start   | -p kubernetes-upgrade-587000          | kubernetes-upgrade-587000 | jenkins | v1.34.0 | 10 Oct 24 11:37 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-587000          | kubernetes-upgrade-587000 | jenkins | v1.34.0 | 10 Oct 24 11:37 PDT | 10 Oct 24 11:37 PDT |
	| start   | -p kubernetes-upgrade-587000          | kubernetes-upgrade-587000 | jenkins | v1.34.0 | 10 Oct 24 11:37 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-587000          | kubernetes-upgrade-587000 | jenkins | v1.34.0 | 10 Oct 24 11:38 PDT | 10 Oct 24 11:38 PDT |
	| start   | -p stopped-upgrade-616000             | minikube                  | jenkins | v1.26.0 | 10 Oct 24 11:38 PDT | 10 Oct 24 11:38 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-616000 stop           | minikube                  | jenkins | v1.26.0 | 10 Oct 24 11:38 PDT | 10 Oct 24 11:38 PDT |
	| start   | -p stopped-upgrade-616000             | stopped-upgrade-616000    | jenkins | v1.34.0 | 10 Oct 24 11:38 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
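
Each Audit row above records one CLI invocation: command, arguments, profile, user, binary version, and start/end timestamps; a blank End Time appears to mark a command that never completed cleanly (here, every v1.34.0 start under the qemu2 driver). A small sketch for pulling rows out of a saved table; the type and function names are invented for illustration:

    package main

    import (
        "fmt"
        "strings"
    )

    // auditRow mirrors the columns of the ==> Audit <== table.
    type auditRow struct {
        Command, Args, Profile, User, Version, Start, End string
    }

    func parseAuditRow(line string) (auditRow, bool) {
        f := strings.Split(line, "|")
        if len(f) != 9 { // leading and trailing "|" yield empty edge fields
            return auditRow{}, false
        }
        for i := range f {
            f[i] = strings.TrimSpace(f[i])
        }
        return auditRow{f[1], f[2], f[3], f[4], f[5], f[6], f[7]}, true
    }

    func main() {
        r, ok := parseAuditRow("| start   | -p running-upgrade-704000 | running-upgrade-704000 | jenkins | v1.34.0 | 10 Oct 24 11:35 PDT |  |")
        if ok && r.End == "" {
            fmt.Printf("%s %s never completed\n", r.Command, r.Args)
        }
    }
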
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 11:38:57
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 11:38:57.583243   13221 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:38:57.583401   13221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:38:57.583405   13221 out.go:358] Setting ErrFile to fd 2...
	I1010 11:38:57.583408   13221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:38:57.583547   13221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:38:57.584785   13221 out.go:352] Setting JSON to false
	I1010 11:38:57.604065   13221 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7708,"bootTime":1728577829,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:38:57.604148   13221 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:38:57.607810   13221 out.go:177] * [stopped-upgrade-616000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:38:57.615791   13221 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:38:57.615841   13221 notify.go:220] Checking for updates...
	I1010 11:38:57.622733   13221 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:38:57.625752   13221 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:38:57.627095   13221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:38:57.629700   13221 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:38:57.632755   13221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:38:57.636055   13221 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:38:57.639739   13221 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1010 11:38:57.642732   13221 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:38:57.646719   13221 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:38:57.653727   13221 start.go:297] selected driver: qemu2
	I1010 11:38:57.653733   13221 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53577 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1010 11:38:57.653782   13221 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:38:57.656649   13221 cni.go:84] Creating CNI manager for ""
	I1010 11:38:57.656685   13221 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:38:57.656714   13221 start.go:340] cluster config:
	{Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53577 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
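
The two &{...} dumps above are the same cluster config printed as a Go struct literal, once while validating the driver and once as the final cluster config; only a few fields matter for this run. A hand-trimmed, illustrative mirror of those fields (the type itself is not minikube's):

    package main

    import "fmt"

    type node struct {
        IP                string
        Port              int
        KubernetesVersion string
        ControlPlane      bool
        Worker            bool
    }

    // clusterConfig keeps just the fields worth reading in the dump.
    type clusterConfig struct {
        Name              string
        Driver            string
        Memory            int // MB
        KubernetesVersion string
        ContainerRuntime  string
        StartHostTimeout  string
        Nodes             []node
    }

    func main() {
        cfg := clusterConfig{
            Name:              "stopped-upgrade-616000",
            Driver:            "qemu2",
            Memory:            2200,
            KubernetesVersion: "v1.24.1",
            ContainerRuntime:  "docker",
            StartHostTimeout:  "6m0s", // the deadline behind the GUEST_START failures
            Nodes: []node{{
                IP: "10.0.2.15", Port: 8443, KubernetesVersion: "v1.24.1",
                ControlPlane: true, Worker: true,
            }},
        }
        fmt.Printf("%+v\n", cfg)
    }
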
	I1010 11:38:57.656764   13221 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:38:57.660721   13221 out.go:177] * Starting "stopped-upgrade-616000" primary control-plane node in "stopped-upgrade-616000" cluster
	I1010 11:38:57.668764   13221 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1010 11:38:57.668787   13221 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1010 11:38:57.668797   13221 cache.go:56] Caching tarball of preloaded images
	I1010 11:38:57.668887   13221 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:38:57.668894   13221 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1010 11:38:57.668945   13221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/config.json ...
	I1010 11:38:57.669494   13221 start.go:360] acquireMachinesLock for stopped-upgrade-616000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:38:57.669554   13221 start.go:364] duration metric: took 51.417µs to acquireMachinesLock for "stopped-upgrade-616000"
	I1010 11:38:57.669565   13221 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:38:57.669571   13221 fix.go:54] fixHost starting: 
	I1010 11:38:57.669701   13221 fix.go:112] recreateIfNeeded on stopped-upgrade-616000: state=Stopped err=<nil>
	W1010 11:38:57.669709   13221 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:38:57.673723   13221 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-616000" ...
	I1010 11:38:57.665817   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:38:57.665969   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:38:57.680209   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:38:57.680289   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:38:57.691625   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:38:57.691700   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:38:57.702635   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:38:57.702721   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:38:57.714760   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:38:57.714872   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:38:57.726589   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:38:57.726679   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:38:57.738257   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:38:57.738342   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:38:57.755706   13085 logs.go:282] 0 containers: []
	W1010 11:38:57.755725   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:38:57.755801   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:38:57.766748   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:38:57.766767   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:38:57.766772   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:38:57.785576   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:38:57.785587   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:38:57.830143   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:38:57.830160   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:38:57.835543   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:38:57.835554   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:38:57.874339   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:38:57.874351   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:38:57.889581   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:38:57.889593   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:38:57.904219   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:38:57.904231   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:38:57.916055   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:38:57.916071   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:38:57.933457   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:38:57.933470   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:38:57.945502   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:38:57.945514   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:38:57.959915   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:38:57.959926   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:38:57.972316   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:38:57.972332   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:38:57.997342   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:38:57.997356   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:38:58.009596   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:38:58.009608   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:38:58.022283   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:38:58.022293   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:38:58.034726   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:38:58.034737   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:00.548494   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:38:57.681755   13221 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:38:57.681836   13221 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53542-:22,hostfwd=tcp::53543-:2376,hostname=stopped-upgrade-616000 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/disk.qcow2
	I1010 11:38:57.734641   13221 main.go:141] libmachine: STDOUT: 
	I1010 11:38:57.734668   13221 main.go:141] libmachine: STDERR: 
	I1010 11:38:57.734674   13221 main.go:141] libmachine: Waiting for VM to start (ssh -p 53542 docker@127.0.0.1)...
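
Restarting the stopped VM comes down to the single qemu-system-aarch64 invocation above: hvf (Hypervisor.framework) acceleration, the boot2docker ISO as the boot CD, and a user-mode NIC whose hostfwd entries map host ports 53542 and 53543 to guest ports 22 (SSH) and 2376 (Docker TLS). A stripped-down sketch of assembling that command with os/exec, omitting the pflash, QMP, and pidfile arguments; paths and ports are copied from the log, the code is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        base := "/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000"
        cmd := exec.Command("qemu-system-aarch64",
            "-M", "virt,highmem=off",
            "-cpu", "host",
            "-accel", "hvf", // hardware acceleration on Apple silicon
            "-m", "2200", "-smp", "2",
            "-boot", "d",
            "-cdrom", base+"/boot2docker.iso",
            // Forward host 53542 -> guest 22 (SSH), 53543 -> guest 2376 (Docker).
            "-nic", "user,model=virtio,hostfwd=tcp::53542-:22,hostfwd=tcp::53543-:2376",
            "-daemonize", base+"/disk.qcow2",
        )
        if err := cmd.Run(); err != nil {
            fmt.Println("qemu failed to start:", err)
        }
    }
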
	I1010 11:39:05.550757   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:05.550870   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:05.561739   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:05.561816   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:05.572432   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:05.572526   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:05.584450   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:05.584521   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:05.596805   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:05.596898   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:05.610136   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:05.610230   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:05.620522   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:05.620609   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:05.630621   13085 logs.go:282] 0 containers: []
	W1010 11:39:05.630636   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:05.630704   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:05.641798   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:05.641816   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:05.641821   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:05.682560   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:05.682587   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:05.727869   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:05.727880   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:05.739687   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:05.739698   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:05.755960   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:05.755971   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:05.766983   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:05.766994   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:05.778746   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:05.778757   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:05.783226   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:05.783232   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:05.796604   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:05.796616   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:05.821327   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:05.821336   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:05.833729   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:05.833740   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:05.848462   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:05.848473   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:05.861628   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:05.861638   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:05.879645   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:05.879655   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:05.891136   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:05.891147   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:05.904730   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:05.904741   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:08.418302   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:13.420517   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:13.420804   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:13.447798   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:13.447954   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:13.464775   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:13.464883   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:13.478000   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:13.478084   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:13.489227   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:13.489292   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:13.499448   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:13.499520   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:13.509916   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:13.509998   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:13.519889   13085 logs.go:282] 0 containers: []
	W1010 11:39:13.519904   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:13.519960   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:13.532418   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:13.532445   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:13.532451   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:13.570789   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:13.570796   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:13.609404   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:13.609414   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:13.627389   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:13.627400   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:13.645451   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:13.645462   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:13.659486   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:13.659497   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:13.671056   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:13.671071   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:13.687520   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:13.687532   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:13.702827   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:13.702839   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:13.728138   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:13.728147   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:13.739524   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:13.739534   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:13.744001   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:13.744007   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:13.757565   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:13.757576   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:13.768513   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:13.768523   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:13.783503   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:13.783514   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:13.795213   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:13.795224   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:16.308537   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:17.990351   13221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/config.json ...
	I1010 11:39:17.991423   13221 machine.go:93] provisionDockerMachine start ...
	I1010 11:39:17.991703   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:17.992257   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:17.992279   13221 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 11:39:18.076383   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 11:39:18.076413   13221 buildroot.go:166] provisioning hostname "stopped-upgrade-616000"
	I1010 11:39:18.076530   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:18.076718   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:18.076729   13221 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-616000 && echo "stopped-upgrade-616000" | sudo tee /etc/hostname
	I1010 11:39:18.151610   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-616000
	
	I1010 11:39:18.151705   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:18.151860   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:18.151872   13221 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-616000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-616000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-616000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 11:39:18.223182   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 11:39:18.223197   13221 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19787-10623/.minikube CaCertPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19787-10623/.minikube}
	I1010 11:39:18.223211   13221 buildroot.go:174] setting up certificates
	I1010 11:39:18.223217   13221 provision.go:84] configureAuth start
	I1010 11:39:18.223224   13221 provision.go:143] copyHostCerts
	I1010 11:39:18.223294   13221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.pem, removing ...
	I1010 11:39:18.223303   13221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.pem
	I1010 11:39:18.223426   13221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.pem (1082 bytes)
	I1010 11:39:18.223659   13221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19787-10623/.minikube/cert.pem, removing ...
	I1010 11:39:18.223664   13221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19787-10623/.minikube/cert.pem
	I1010 11:39:18.223718   13221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19787-10623/.minikube/cert.pem (1123 bytes)
	I1010 11:39:18.223853   13221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19787-10623/.minikube/key.pem, removing ...
	I1010 11:39:18.223857   13221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19787-10623/.minikube/key.pem
	I1010 11:39:18.223907   13221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19787-10623/.minikube/key.pem (1675 bytes)
	I1010 11:39:18.224033   13221 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-616000 san=[127.0.0.1 localhost minikube stopped-upgrade-616000]
	I1010 11:39:18.260368   13221 provision.go:177] copyRemoteCerts
	I1010 11:39:18.260408   13221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 11:39:18.260415   13221 sshutil.go:53] new ssh client: &{IP:localhost Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1010 11:39:18.294261   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 11:39:18.301595   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 11:39:18.308526   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 11:39:18.315753   13221 provision.go:87] duration metric: took 92.52575ms to configureAuth
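
configureAuth above refreshes the machine's Docker TLS material: the host CA, cert, and key are copied into ~/.minikube, then a server certificate is issued for SANs [127.0.0.1 localhost minikube stopped-upgrade-616000] and pushed to /etc/docker. A compact crypto/x509 sketch of that issuance; a throwaway in-memory CA stands in for minikube's ca.pem/ca-key.pem, and error handling is elided:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; minikube would load the existing ca.pem/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs from the provision.go line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-616000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-616000"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
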
	I1010 11:39:18.315763   13221 buildroot.go:189] setting minikube options for container-runtime
	I1010 11:39:18.315871   13221 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:39:18.315911   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:18.316001   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:18.316006   13221 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1010 11:39:18.379456   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1010 11:39:18.379466   13221 buildroot.go:70] root file system type: tmpfs
	I1010 11:39:18.379517   13221 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1010 11:39:18.379579   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:18.379687   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:18.379721   13221 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1010 11:39:18.446912   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1010 11:39:18.446976   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:18.447081   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:18.447089   13221 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1010 11:39:18.819051   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1010 11:39:18.819065   13221 machine.go:96] duration metric: took 827.631583ms to provisionDockerMachine
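
The docker.service update above is deliberately idempotent: the unit is rendered to docker.service.new, diffed against the live unit, and only when they differ moved into place and followed by daemon-reload, enable, and restart; that is why diff's "No such file or directory" on this first boot is harmless. The unit itself clears the inherited ExecStart= before setting its own, since systemd rejects two ExecStart= values outside Type=oneshot. The same compare-then-swap step driven from Go (paths and helper names are illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installUnit rewrites the unit only when it differs from what is
    // live, mirroring the "diff || { mv; daemon-reload; restart; }"
    // one-liner in the log, so an unchanged unit skips the restart.
    func installUnit(path string, unit []byte) error {
        if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, unit) {
            return nil
        }
        if err := os.WriteFile(path+".new", unit, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        if err := installUnit("/lib/systemd/system/docker.service", unit); err != nil {
            fmt.Println(err)
        }
    }
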
	I1010 11:39:18.819073   13221 start.go:293] postStartSetup for "stopped-upgrade-616000" (driver="qemu2")
	I1010 11:39:18.819079   13221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 11:39:18.819152   13221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 11:39:18.819162   13221 sshutil.go:53] new ssh client: &{IP:localhost Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1010 11:39:18.853684   13221 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 11:39:18.855041   13221 info.go:137] Remote host: Buildroot 2021.02.12
	I1010 11:39:18.855048   13221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19787-10623/.minikube/addons for local assets ...
	I1010 11:39:18.855117   13221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19787-10623/.minikube/files for local assets ...
	I1010 11:39:18.855200   13221 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/ssl/certs/111352.pem -> 111352.pem in /etc/ssl/certs
	I1010 11:39:18.855297   13221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 11:39:18.858285   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/ssl/certs/111352.pem --> /etc/ssl/certs/111352.pem (1708 bytes)
	I1010 11:39:18.865591   13221 start.go:296] duration metric: took 46.512917ms for postStartSetup
	I1010 11:39:18.865607   13221 fix.go:56] duration metric: took 21.196246292s for fixHost
	I1010 11:39:18.865657   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:18.865756   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:18.865760   13221 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 11:39:18.927552   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728585559.360355129
	
	I1010 11:39:18.927560   13221 fix.go:216] guest clock: 1728585559.360355129
	I1010 11:39:18.927564   13221 fix.go:229] Guest: 2024-10-10 11:39:19.360355129 -0700 PDT Remote: 2024-10-10 11:39:18.865609 -0700 PDT m=+21.304767251 (delta=494.746129ms)
	I1010 11:39:18.927575   13221 fix.go:200] guest clock delta is within tolerance: 494.746129ms
	I1010 11:39:18.927579   13221 start.go:83] releasing machines lock for "stopped-upgrade-616000", held for 21.258228875s
	I1010 11:39:18.927641   13221 ssh_runner.go:195] Run: cat /version.json
	I1010 11:39:18.927649   13221 sshutil.go:53] new ssh client: &{IP:localhost Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1010 11:39:18.927671   13221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 11:39:18.927688   13221 sshutil.go:53] new ssh client: &{IP:localhost Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	W1010 11:39:18.928230   13221 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53542: connect: connection refused
	I1010 11:39:18.928256   13221 retry.go:31] will retry after 269.51375ms: dial tcp [::1]:53542: connect: connection refused
	W1010 11:39:19.236594   13221 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1010 11:39:19.236697   13221 ssh_runner.go:195] Run: systemctl --version
	I1010 11:39:19.239144   13221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 11:39:19.241567   13221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 11:39:19.241623   13221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1010 11:39:19.245381   13221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1010 11:39:19.251092   13221 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
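The two find/sed invocations above rewrite any bridge and podman CNI configs so their subnet matches the pod CIDR (10.244.0.0/16) and drop IPv6 entries. A simplified sketch of the same rewrite against the single file the log reports patching:

    CONF=/etc/cni/net.d/87-podman-bridge.conflist
    # Point the CNI subnet at the cluster pod CIDR and fix the gateway,
    # mirroring the sed expressions in the log.
    sudo sed -i -r \
      -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
      -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
      "$CONF"
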
	I1010 11:39:19.251113   13221 start.go:495] detecting cgroup driver to use...
	I1010 11:39:19.251202   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 11:39:19.259149   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1010 11:39:19.262713   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1010 11:39:19.266277   13221 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1010 11:39:19.266312   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1010 11:39:19.269618   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1010 11:39:19.272597   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1010 11:39:19.275460   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1010 11:39:19.278784   13221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 11:39:19.282224   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1010 11:39:19.285714   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1010 11:39:19.288898   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
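Taken together, the sed edits above configure containerd for the cgroupfs driver: the pause image is pinned, SystemdCgroup is switched off, the legacy v1 runtime names are mapped to io.containerd.runc.v2, the CNI conf_dir is set, and unprivileged ports are enabled. A quick way to confirm the result (a sketch; the expected values below are taken from the sed expressions, not from a dump of the actual file):

    # Inspect the settings the edits above should have produced.
    sudo grep -E 'sandbox_image|SystemdCgroup|runc\.v2|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    # Expected (approximate) matches:
    #   sandbox_image = "registry.k8s.io/pause:3.7"
    #   SystemdCgroup = false
    #   "io.containerd.runc.v2"
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true
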
	I1010 11:39:19.291767   13221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 11:39:19.294830   13221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 11:39:19.298110   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:39:19.381918   13221 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1010 11:39:19.388147   13221 start.go:495] detecting cgroup driver to use...
	I1010 11:39:19.388215   13221 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1010 11:39:19.393055   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 11:39:19.398212   13221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 11:39:19.407013   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 11:39:19.412399   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1010 11:39:19.416968   13221 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1010 11:39:19.474294   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1010 11:39:19.479695   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 11:39:19.485299   13221 ssh_runner.go:195] Run: which cri-dockerd
	I1010 11:39:19.486675   13221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1010 11:39:19.489748   13221 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
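The 189-byte file copied to /etc/systemd/system/cri-docker.service.d/10-cni.conf is a systemd drop-in; its exact contents are not shown in the log. The general mechanism it relies on looks like this (the ExecStart override below is illustrative, not the file from the log):

    # Create a drop-in that overrides part of cri-docker.service.
    sudo mkdir -p /etc/systemd/system/cri-docker.service.d
    sudo tee /etc/systemd/system/cri-docker.service.d/10-cni.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/cri-dockerd --network-plugin=cni
    EOF
    sudo systemctl daemon-reload
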
	I1010 11:39:19.495063   13221 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1010 11:39:19.572794   13221 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1010 11:39:19.655670   13221 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1010 11:39:19.655740   13221 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1010 11:39:19.660886   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:39:19.737781   13221 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1010 11:39:20.884174   13221 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.146385542s)
	I1010 11:39:20.884261   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1010 11:39:20.888887   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1010 11:39:20.893099   13221 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1010 11:39:20.976569   13221 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1010 11:39:21.063632   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:39:21.143682   13221 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1010 11:39:21.149548   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1010 11:39:21.153917   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:39:21.242814   13221 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1010 11:39:21.280991   13221 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1010 11:39:21.281095   13221 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1010 11:39:21.283047   13221 start.go:563] Will wait 60s for crictl version
	I1010 11:39:21.283110   13221 ssh_runner.go:195] Run: which crictl
	I1010 11:39:21.284828   13221 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 11:39:21.300111   13221 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1010 11:39:21.300208   13221 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1010 11:39:21.317765   13221 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1010 11:39:21.310945   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:21.311054   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:21.323203   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:21.323288   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:21.334697   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:21.334775   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:21.345911   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:21.345976   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:21.356689   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:21.356755   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:21.368948   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:21.369024   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:21.381020   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:21.381103   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:21.402578   13085 logs.go:282] 0 containers: []
	W1010 11:39:21.402592   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:21.402662   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:21.415566   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:21.415595   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:21.415601   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:21.456905   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:21.456922   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:21.469751   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:21.469765   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:21.482284   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:21.482296   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:21.496976   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:21.496989   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:21.510268   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:21.510281   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:21.532590   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:21.532604   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:21.545787   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:21.545804   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:21.564968   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:21.564978   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:21.581236   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:21.581249   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:21.603212   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:21.603226   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:21.616377   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:21.616392   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:21.621370   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:21.621383   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:21.662230   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:21.662244   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:21.677392   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:21.677405   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:21.704289   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:21.704303   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:21.338048   13221 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1010 11:39:21.338144   13221 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1010 11:39:21.339996   13221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
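The grep/echo pipeline above is a replace-or-append idiom for /etc/hosts: any existing line for the name is filtered out, the fresh mapping is appended, and the result is copied back via a temp file so the live file is never half-written. The same idiom, parameterized (values taken from the log):

    IP=10.0.2.2
    NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
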
	I1010 11:39:21.343693   13221 kubeadm.go:883] updating cluster {Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53577 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...

	I1010 11:39:21.343766   13221 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1010 11:39:21.343830   13221 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1010 11:39:21.355415   13221 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1010 11:39:21.355427   13221 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1010 11:39:21.355490   13221 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1010 11:39:21.359154   13221 ssh_runner.go:195] Run: which lz4
	I1010 11:39:21.360658   13221 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 11:39:21.361856   13221 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 11:39:21.361870   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1010 11:39:22.330981   13221 docker.go:649] duration metric: took 970.383125ms to copy over tarball
	I1010 11:39:22.331049   13221 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 11:39:24.219659   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:23.518040   13221 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.18698675s)
	I1010 11:39:23.518057   13221 ssh_runner.go:146] rm: /preloaded.tar.lz4
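The preload step follows a check-copy-extract pattern: stat the tarball on the guest, scp it over only when the probe fails (as it did above), then unpack into /var with in-stream lz4 decompression. The guest-side portion as a plain sketch, with the commands taken from the log:

    # On the guest, after the ~360 MB tarball has been copied over:
    TARBALL=/preloaded.tar.lz4
    stat -c "%s %y" "$TARBALL"    # existence/size probe, as in the log
    # -I lz4 decompresses in-stream; --xattrs keeps file capabilities intact.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$TARBALL"
    sudo rm -f "$TARBALL"         # reclaim the space once the images are unpacked
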
	I1010 11:39:23.534360   13221 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1010 11:39:23.538154   13221 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1010 11:39:23.543735   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:39:23.621788   13221 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1010 11:39:25.352003   13221 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.730214834s)
	I1010 11:39:25.352121   13221 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1010 11:39:25.363622   13221 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1010 11:39:25.363630   13221 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1010 11:39:25.363636   13221 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 11:39:25.370211   13221 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:39:25.371530   13221 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:39:25.373578   13221 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:39:25.373884   13221 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:39:25.375699   13221 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:39:25.375727   13221 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:39:25.376995   13221 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:39:25.377098   13221 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1010 11:39:25.378334   13221 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:39:25.378835   13221 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:39:25.379622   13221 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1010 11:39:25.379903   13221 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1010 11:39:25.380712   13221 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:39:25.381028   13221 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:39:25.381498   13221 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1010 11:39:25.382588   13221 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:39:26.028122   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:39:26.039899   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:39:26.040984   13221 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1010 11:39:26.041010   13221 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:39:26.041043   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:39:26.051911   13221 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1010 11:39:26.051934   13221 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:39:26.052040   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:39:26.052821   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:39:26.065723   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1010 11:39:26.067050   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1010 11:39:26.068859   13221 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1010 11:39:26.068875   13221 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:39:26.068931   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:39:26.079410   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1010 11:39:26.093529   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1010 11:39:26.103337   13221 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1010 11:39:26.103364   13221 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1010 11:39:26.103420   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1010 11:39:26.113312   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1010 11:39:26.174449   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:39:26.186820   13221 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1010 11:39:26.186840   13221 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:39:26.186915   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:39:26.197566   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1010 11:39:26.211253   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1010 11:39:26.221903   13221 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1010 11:39:26.221923   13221 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1010 11:39:26.222003   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1010 11:39:26.236246   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1010 11:39:26.236400   13221 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1010 11:39:26.237970   13221 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1010 11:39:26.237978   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1010 11:39:26.246039   13221 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1010 11:39:26.246048   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W1010 11:39:26.250279   13221 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1010 11:39:26.250419   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:39:26.284472   13221 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1010 11:39:26.284515   13221 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1010 11:39:26.284532   13221 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:39:26.284625   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:39:26.296678   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1010 11:39:26.296816   13221 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1010 11:39:26.298270   13221 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1010 11:39:26.298280   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1010 11:39:26.339098   13221 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1010 11:39:26.339134   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1010 11:39:26.377732   13221 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1010 11:39:26.455416   13221 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1010 11:39:26.455567   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:39:26.468908   13221 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1010 11:39:26.468939   13221 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:39:26.469005   13221 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:39:26.483816   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 11:39:26.483951   13221 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 11:39:26.485250   13221 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1010 11:39:26.485263   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1010 11:39:26.514868   13221 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 11:39:26.514885   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1010 11:39:26.754270   13221 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 11:39:26.754316   13221 cache_images.go:92] duration metric: took 1.39068725s to LoadCachedImages
	W1010 11:39:26.754369   13221 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
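Each cache miss above follows the same recovery loop: inspect the image in the runtime, remove the stale or wrong-architecture copy, scp the cached tarball from the host, and pipe it into docker load. A guest-side sketch for one image (the scp leg is omitted; the tarball path is the one from the log):

    IMG=registry.k8s.io/pause:3.7
    TAR=/var/lib/minikube/images/pause_3.7
    # If the image is missing (or its hash no longer matches), reload from cache.
    if ! docker image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
        sudo cat "$TAR" | docker load
    fi
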
	I1010 11:39:26.754376   13221 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1010 11:39:26.754434   13221 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-616000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 11:39:26.754511   13221 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1010 11:39:26.768032   13221 cni.go:84] Creating CNI manager for ""
	I1010 11:39:26.768043   13221 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:39:26.768048   13221 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 11:39:26.768057   13221 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-616000 NodeName:stopped-upgrade-616000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 11:39:26.768127   13221 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-616000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
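A cheap way to confirm a generated config like the one above at least parses is to feed it to a read-only kubeadm subcommand; `kubeadm config images list --config` fails fast on malformed YAML. A hedged sketch, reusing the binary path from the log:

    # Read-only parse check: lists the images the config implies, or errors out.
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml
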
	I1010 11:39:26.768197   13221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1010 11:39:26.771037   13221 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 11:39:26.771074   13221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 11:39:26.774159   13221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1010 11:39:26.779226   13221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 11:39:26.784276   13221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1010 11:39:26.789445   13221 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1010 11:39:26.790590   13221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 11:39:26.794399   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:39:26.874098   13221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 11:39:26.880185   13221 certs.go:68] Setting up /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000 for IP: 10.0.2.15
	I1010 11:39:26.880195   13221 certs.go:194] generating shared ca certs ...
	I1010 11:39:26.880205   13221 certs.go:226] acquiring lock for ca certs: {Name:mk609fb55a881bb4c70c8ff17f366ce3ffd355cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:39:26.880372   13221 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.key
	I1010 11:39:26.880638   13221 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/proxy-client-ca.key
	I1010 11:39:26.880649   13221 certs.go:256] generating profile certs ...
	I1010 11:39:26.880879   13221 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/client.key
	I1010 11:39:26.880899   13221 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213
	I1010 11:39:26.880911   13221 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1010 11:39:26.982871   13221 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213 ...
	I1010 11:39:26.982885   13221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213: {Name:mke4d2cca97cd85a4f67bb0f1cfbfeabfb6c5007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:39:26.983174   13221 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213 ...
	I1010 11:39:26.983179   13221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213: {Name:mk871611112a3a344c03cb5c05e3edc8ede37b7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:39:26.983328   13221 certs.go:381] copying /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.crt
	I1010 11:39:26.983442   13221 certs.go:385] copying /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.key
	I1010 11:39:26.983785   13221 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/proxy-client.key
	I1010 11:39:26.983928   13221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/11135.pem (1338 bytes)
	W1010 11:39:26.984111   13221 certs.go:480] ignoring /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/11135_empty.pem, impossibly tiny 0 bytes
	I1010 11:39:26.984119   13221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 11:39:26.984148   13221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem (1082 bytes)
	I1010 11:39:26.984169   13221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem (1123 bytes)
	I1010 11:39:26.984189   13221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/key.pem (1675 bytes)
	I1010 11:39:26.984243   13221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/ssl/certs/111352.pem (1708 bytes)
	I1010 11:39:26.984610   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 11:39:26.991383   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1010 11:39:26.998850   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 11:39:27.005720   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 11:39:27.012613   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 11:39:27.019395   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 11:39:27.026770   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 11:39:27.034458   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 11:39:27.042144   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 11:39:27.049534   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/11135.pem --> /usr/share/ca-certificates/11135.pem (1338 bytes)
	I1010 11:39:27.056750   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/ssl/certs/111352.pem --> /usr/share/ca-certificates/111352.pem (1708 bytes)
	I1010 11:39:27.063762   13221 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 11:39:27.068745   13221 ssh_runner.go:195] Run: openssl version
	I1010 11:39:27.070654   13221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 11:39:27.074105   13221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 11:39:27.075680   13221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 18:35 /usr/share/ca-certificates/minikubeCA.pem
	I1010 11:39:27.075713   13221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 11:39:27.077504   13221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 11:39:27.080283   13221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11135.pem && ln -fs /usr/share/ca-certificates/11135.pem /etc/ssl/certs/11135.pem"
	I1010 11:39:27.083139   13221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11135.pem
	I1010 11:39:27.084505   13221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:23 /usr/share/ca-certificates/11135.pem
	I1010 11:39:27.084532   13221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11135.pem
	I1010 11:39:27.086241   13221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11135.pem /etc/ssl/certs/51391683.0"
	I1010 11:39:27.089650   13221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111352.pem && ln -fs /usr/share/ca-certificates/111352.pem /etc/ssl/certs/111352.pem"
	I1010 11:39:27.092631   13221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111352.pem
	I1010 11:39:27.093991   13221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:23 /usr/share/ca-certificates/111352.pem
	I1010 11:39:27.094025   13221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111352.pem
	I1010 11:39:27.095805   13221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111352.pem /etc/ssl/certs/3ec20f2e.0"
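The openssl/ln pairs above implement the standard OpenSSL CA directory layout: each CA is copied under /usr/share/ca-certificates and symlinked as <subject-hash>.0 in /etc/ssl/certs so TLS clients can locate it by hash. The same steps for one cert, slightly simplified (the log links via the /etc/ssl/certs copy instead):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
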
	I1010 11:39:27.098955   13221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 11:39:27.100778   13221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 11:39:27.103540   13221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 11:39:27.105643   13221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 11:39:27.107933   13221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 11:39:27.109723   13221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 11:39:27.111514   13221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
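`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is why each control-plane cert is probed individually above. The same checks as a loop over the paths from the log:

    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
               etcd/server etcd/healthcheck-client etcd/peer; do
        openssl x509 -noout -in "/var/lib/minikube/certs/$crt.crt" -checkend 86400 \
            || echo "$crt expires within 24h"
    done
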
	I1010 11:39:27.113321   13221 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53577 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1010 11:39:27.113395   13221 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1010 11:39:27.123272   13221 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 11:39:27.127038   13221 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 11:39:27.127043   13221 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 11:39:27.127078   13221 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 11:39:27.130192   13221 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 11:39:27.130484   13221 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-616000" does not appear in /Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:39:27.130575   13221 kubeconfig.go:62] /Users/jenkins/minikube-integration/19787-10623/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-616000" cluster setting kubeconfig missing "stopped-upgrade-616000" context setting]
	I1010 11:39:27.130776   13221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/kubeconfig: {Name:mk76f18909e94718c05e51991d3ea4660849ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:39:27.131190   13221 kapi.go:59] client config for stopped-upgrade-616000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/client.key", CAFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102322a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1010 11:39:27.131681   13221 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 11:39:27.134479   13221 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-616000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1010 11:39:27.134487   13221 kubeadm.go:1160] stopping kube-system containers ...
	I1010 11:39:27.134536   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1010 11:39:27.144944   13221 docker.go:483] Stopping containers: [c10b1f623e8e 295994b875c3 8e4c05f7b12f d0634f9bbbf3 33e7c52c5d74 14ff5da1faec 92c530ce8e31 d7741e6115dd]
	I1010 11:39:27.145017   13221 ssh_runner.go:195] Run: docker stop c10b1f623e8e 295994b875c3 8e4c05f7b12f d0634f9bbbf3 33e7c52c5d74 14ff5da1faec 92c530ce8e31 d7741e6115dd
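The stop step above leans on the naming convention dockershim/cri-dockerd uses: kube-system containers are named k8s_<container>_<pod>_(kube-system)_..., so a single docker ps name filter plus docker stop covers them all. The equivalent one-liner, using the filter from the log:

    docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' \
        | xargs -r docker stop
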
	I1010 11:39:27.155808   13221 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 11:39:27.161426   13221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 11:39:27.164185   13221 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 11:39:27.164191   13221 kubeadm.go:157] found existing configuration files:
	
	I1010 11:39:27.164222   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/admin.conf
	I1010 11:39:27.166763   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 11:39:27.166790   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 11:39:27.169851   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/kubelet.conf
	I1010 11:39:27.172449   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 11:39:27.172484   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 11:39:27.174930   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/controller-manager.conf
	I1010 11:39:27.177856   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 11:39:27.177883   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 11:39:27.180723   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/scheduler.conf
	I1010 11:39:27.183149   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 11:39:27.183178   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 11:39:27.186238   13221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 11:39:27.189187   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:39:27.210999   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:39:27.550230   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
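The block above is minikube's stale-config cleanup followed by a phased re-init: each expected kubeconfig under /etc/kubernetes is grepped for the current control-plane endpoint and removed when the check fails, then the individual kubeadm init phases are rerun against the freshly copied config. A minimal shell sketch of the same flow, reconstructed from the Run: lines above (endpoint, paths, and version are taken from this log, not from minikube's source):

	ENDPOINT=https://control-plane.minikube.internal:53577
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # drop any kubeconfig that does not reference the current endpoint
	  sudo grep -q "$ENDPOINT" /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
	done
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	# rerun the individual init phases instead of a full kubeadm init
	for phase in "certs all" "kubeconfig all" "kubelet-start"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done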
	I1010 11:39:29.222202   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:29.222328   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:29.233803   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:29.233907   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:29.246720   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:29.246824   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:29.263337   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:29.263428   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:29.274054   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:29.274150   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:29.284833   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:29.284908   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:29.296300   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:29.296379   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:29.306982   13085 logs.go:282] 0 containers: []
	W1010 11:39:29.306994   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:29.307061   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:29.318997   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:29.319018   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:29.319023   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:29.323862   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:29.323869   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:29.337808   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:29.337820   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:29.373372   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:29.373384   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:29.385894   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:29.385906   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:29.401753   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:29.401765   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:29.413806   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:29.413823   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:29.431914   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:29.431924   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:29.443695   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:29.443707   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:29.455039   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:29.455049   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:29.468686   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:29.468695   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:29.482475   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:29.482484   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:29.493913   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:29.493925   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:29.533578   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:29.533587   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:29.544850   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:29.544859   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:29.568129   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:29.568138   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
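Each diagnostic cycle above follows the same shape: one `docker ps -a --filter=name=k8s_<component>` query per control-plane component to enumerate container IDs, then a 400-line tail of every match, plus the kubelet and docker journals, dmesg, and a container status listing. A hand-run equivalent of one cycle, with the component names and tail length copied from this log:

	for comp in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner; do
	  for id in $(docker ps -a --filter=name=k8s_$comp --format '{{.ID}}'); do
	    docker logs --tail 400 "$id"
	  done
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a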
	I1010 11:39:32.094860   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:27.680630   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:39:27.711153   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:39:27.744036   13221 api_server.go:52] waiting for apiserver process to appear ...
	I1010 11:39:27.744125   13221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:39:28.244956   13221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:39:28.746215   13221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:39:28.751713   13221 api_server.go:72] duration metric: took 1.007687875s to wait for apiserver process to appear ...
	I1010 11:39:28.751725   13221 api_server.go:88] waiting for apiserver healthz status ...
	I1010 11:39:28.751745   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
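From this point both PIDs settle into the same cadence: pgrep runs roughly every 500ms until a kube-apiserver process appears, then /healthz is fetched with a client-side timeout, and every expiry is logged as "stopped: ... context deadline exceeded". An equivalent manual probe (the IP and port come from this log; the curl flags are an assumption, since minikube itself uses a Go HTTP client rather than curl):

	# wait for the apiserver process, then probe its health endpoint
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
	# -k: the apiserver serving cert is not trusted by the host;
	# --max-time mirrors the Client.Timeout seen in the log
	curl -k --max-time 5 https://10.0.2.15:8443/healthz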
	I1010 11:39:37.097181   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:37.097522   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:37.139183   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:37.139358   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:37.166793   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:37.166900   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:37.179911   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:37.180000   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:37.190774   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:37.190868   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:37.205422   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:37.205501   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:37.217356   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:37.217430   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:37.228881   13085 logs.go:282] 0 containers: []
	W1010 11:39:37.228894   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:37.228966   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:37.239245   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:37.239263   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:37.239268   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:37.243875   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:37.243880   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:37.255536   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:37.255548   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:37.266500   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:37.266511   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:37.279788   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:37.279799   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:37.294053   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:37.294064   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:37.314490   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:37.314500   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:37.331359   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:37.331370   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:37.343534   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:37.343544   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:37.384211   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:37.384221   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:33.753762   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:33.753787   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:37.423926   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:37.423938   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:37.435642   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:37.435656   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:37.457413   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:37.457424   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:37.471072   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:37.471083   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:37.482354   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:37.482366   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:37.493315   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:37.493326   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:40.019862   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:38.753970   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:38.754009   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:45.022095   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:45.022386   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:45.051897   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:45.052061   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:45.069890   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:45.069993   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:45.083694   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:45.083789   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:45.095995   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:45.096072   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:45.106932   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:45.107004   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:45.117576   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:45.117679   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:45.127688   13085 logs.go:282] 0 containers: []
	W1010 11:39:45.127701   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:45.127765   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:45.143503   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:45.143520   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:45.143526   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:45.180166   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:45.180177   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:45.197235   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:45.197249   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:45.210985   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:45.210999   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:45.228689   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:45.228700   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:45.240272   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:45.240283   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:45.266475   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:45.266489   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:45.282112   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:45.282124   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:45.296410   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:45.296422   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:45.309838   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:45.309850   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:45.323694   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:45.323709   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:45.337922   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:45.337933   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:45.352986   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:45.353003   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:45.366354   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:45.366366   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:45.404949   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:45.404963   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:45.410125   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:45.410135   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:43.754266   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:43.754289   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:47.924347   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:48.754678   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:48.754722   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:52.926590   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:52.926896   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:39:52.953438   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:39:52.953577   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:39:52.971212   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:39:52.971306   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:39:52.988884   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:39:52.988965   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:39:52.999516   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:39:52.999611   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:39:53.009766   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:39:53.009859   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:39:53.020611   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:39:53.020702   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:39:53.031073   13085 logs.go:282] 0 containers: []
	W1010 11:39:53.031088   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:39:53.031161   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:39:53.045357   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:39:53.045378   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:39:53.045383   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:39:53.057993   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:39:53.058005   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:39:53.076616   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:39:53.076627   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:39:53.088936   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:39:53.088948   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:39:53.100601   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:39:53.100612   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:39:53.113949   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:39:53.113958   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:39:53.125046   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:39:53.125058   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:53.142071   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:39:53.142082   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:39:53.175848   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:39:53.175858   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:39:53.190300   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:39:53.190311   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:39:53.205300   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:39:53.205311   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:39:53.219296   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:39:53.219309   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:39:53.231722   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:39:53.231734   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:39:53.274544   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:39:53.274555   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:39:53.278633   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:39:53.278639   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:39:53.289475   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:39:53.289486   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:39:55.815825   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:53.755321   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:53.755340   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:00.817982   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:00.818117   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:40:00.832080   13085 logs.go:282] 2 containers: [daf65d9cdf96 fe7dc23baec1]
	I1010 11:40:00.832168   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:40:00.843325   13085 logs.go:282] 2 containers: [5e862a657369 e72a4aca3378]
	I1010 11:40:00.843401   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:40:00.859393   13085 logs.go:282] 1 containers: [8d08ce77907a]
	I1010 11:40:00.859459   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:40:00.870556   13085 logs.go:282] 2 containers: [0bfad8049f4e c69c376a22ae]
	I1010 11:40:00.870642   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:40:00.881032   13085 logs.go:282] 1 containers: [250e31232fca]
	I1010 11:40:00.881115   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:40:00.892425   13085 logs.go:282] 2 containers: [497d7b1fa405 93056d740505]
	I1010 11:40:00.892514   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:40:00.903104   13085 logs.go:282] 0 containers: []
	W1010 11:40:00.903119   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:40:00.903202   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:40:00.915573   13085 logs.go:282] 1 containers: [5a407cac17ad]
	I1010 11:40:00.915592   13085 logs.go:123] Gathering logs for kube-apiserver [fe7dc23baec1] ...
	I1010 11:40:00.915598   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe7dc23baec1"
	I1010 11:40:00.927847   13085 logs.go:123] Gathering logs for kube-proxy [250e31232fca] ...
	I1010 11:40:00.927857   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250e31232fca"
	I1010 11:40:00.939284   13085 logs.go:123] Gathering logs for kube-controller-manager [93056d740505] ...
	I1010 11:40:00.939294   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93056d740505"
	I1010 11:40:00.950346   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:40:00.950360   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:40:00.990734   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:40:00.990744   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:40:01.027494   13085 logs.go:123] Gathering logs for kube-apiserver [daf65d9cdf96] ...
	I1010 11:40:01.027505   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daf65d9cdf96"
	I1010 11:40:01.042723   13085 logs.go:123] Gathering logs for coredns [8d08ce77907a] ...
	I1010 11:40:01.042738   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d08ce77907a"
	I1010 11:40:01.054421   13085 logs.go:123] Gathering logs for kube-scheduler [c69c376a22ae] ...
	I1010 11:40:01.054431   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69c376a22ae"
	I1010 11:40:01.066237   13085 logs.go:123] Gathering logs for storage-provisioner [5a407cac17ad] ...
	I1010 11:40:01.066251   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a407cac17ad"
	I1010 11:40:01.084905   13085 logs.go:123] Gathering logs for kube-controller-manager [497d7b1fa405] ...
	I1010 11:40:01.084915   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497d7b1fa405"
	I1010 11:40:01.110196   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:40:01.110208   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:40:01.134160   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:40:01.134171   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:40:01.147772   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:40:01.147783   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:40:01.152242   13085 logs.go:123] Gathering logs for etcd [5e862a657369] ...
	I1010 11:40:01.152249   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e862a657369"
	I1010 11:40:01.166384   13085 logs.go:123] Gathering logs for etcd [e72a4aca3378] ...
	I1010 11:40:01.166396   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e72a4aca3378"
	I1010 11:40:01.179492   13085 logs.go:123] Gathering logs for kube-scheduler [0bfad8049f4e] ...
	I1010 11:40:01.179503   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bfad8049f4e"
	I1010 11:39:58.756040   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:58.756133   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:03.705639   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:03.757266   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:03.757293   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:08.707949   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:08.708153   13085 kubeadm.go:597] duration metric: took 4m4.668867291s to restartPrimaryControlPlane
	W1010 11:40:08.708317   13085 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 11:40:08.708385   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1010 11:40:09.695146   13085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 11:40:09.700023   13085 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 11:40:09.702819   13085 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 11:40:09.705443   13085 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 11:40:09.705449   13085 kubeadm.go:157] found existing configuration files:
	
	I1010 11:40:09.705481   13085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/admin.conf
	I1010 11:40:09.708350   13085 kubeadm.go:163] "https://control-plane.minikube.internal:53349" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 11:40:09.708380   13085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 11:40:09.710860   13085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/kubelet.conf
	I1010 11:40:09.713342   13085 kubeadm.go:163] "https://control-plane.minikube.internal:53349" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 11:40:09.713390   13085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 11:40:09.716551   13085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/controller-manager.conf
	I1010 11:40:09.719146   13085 kubeadm.go:163] "https://control-plane.minikube.internal:53349" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 11:40:09.719189   13085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 11:40:09.721605   13085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/scheduler.conf
	I1010 11:40:09.724484   13085 kubeadm.go:163] "https://control-plane.minikube.internal:53349" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53349 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 11:40:09.724519   13085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 11:40:09.727255   13085 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 11:40:09.743767   13085 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1010 11:40:09.743797   13085 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 11:40:09.791762   13085 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 11:40:09.791828   13085 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 11:40:09.791888   13085 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 11:40:09.841059   13085 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 11:40:09.846321   13085 out.go:235]   - Generating certificates and keys ...
	I1010 11:40:09.846355   13085 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 11:40:09.846389   13085 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 11:40:09.846427   13085 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 11:40:09.846455   13085 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 11:40:09.846489   13085 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 11:40:09.846515   13085 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 11:40:09.846544   13085 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 11:40:09.846573   13085 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 11:40:09.846608   13085 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 11:40:09.846654   13085 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 11:40:09.846671   13085 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 11:40:09.846697   13085 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 11:40:09.963400   13085 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 11:40:10.198887   13085 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 11:40:10.297357   13085 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 11:40:10.440510   13085 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 11:40:10.467812   13085 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 11:40:10.468197   13085 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 11:40:10.468272   13085 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 11:40:10.546868   13085 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 11:40:10.551580   13085 out.go:235]   - Booting up control plane ...
	I1010 11:40:10.551629   13085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 11:40:10.551674   13085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 11:40:10.551728   13085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 11:40:10.551813   13085 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 11:40:10.551929   13085 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 11:40:08.757712   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:08.757752   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:15.055919   13085 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503902 seconds
	I1010 11:40:15.056013   13085 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 11:40:15.061582   13085 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 11:40:15.569301   13085 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 11:40:15.569410   13085 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-704000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 11:40:16.073143   13085 kubeadm.go:310] [bootstrap-token] Using token: 8iuhps.egjej8sdpgu4s4u9
	I1010 11:40:16.076757   13085 out.go:235]   - Configuring RBAC rules ...
	I1010 11:40:16.076810   13085 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 11:40:16.076857   13085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 11:40:16.080344   13085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 11:40:16.081153   13085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 11:40:16.082073   13085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 11:40:16.082957   13085 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 11:40:16.086412   13085 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 11:40:16.267960   13085 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 11:40:16.477472   13085 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 11:40:16.477926   13085 kubeadm.go:310] 
	I1010 11:40:16.477958   13085 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 11:40:16.477963   13085 kubeadm.go:310] 
	I1010 11:40:16.478001   13085 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 11:40:16.478051   13085 kubeadm.go:310] 
	I1010 11:40:16.478126   13085 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 11:40:16.478240   13085 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 11:40:16.478271   13085 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 11:40:16.478274   13085 kubeadm.go:310] 
	I1010 11:40:16.478351   13085 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 11:40:16.478355   13085 kubeadm.go:310] 
	I1010 11:40:16.478377   13085 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 11:40:16.478379   13085 kubeadm.go:310] 
	I1010 11:40:16.478404   13085 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 11:40:16.478536   13085 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 11:40:16.478641   13085 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 11:40:16.478649   13085 kubeadm.go:310] 
	I1010 11:40:16.478745   13085 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 11:40:16.478781   13085 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 11:40:16.478783   13085 kubeadm.go:310] 
	I1010 11:40:16.478891   13085 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8iuhps.egjej8sdpgu4s4u9 \
	I1010 11:40:16.478940   13085 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79bae548dd16cfe936348b462da3f6d7ee9037c9333f736d9e5d628396c7b6e1 \
	I1010 11:40:16.478953   13085 kubeadm.go:310] 	--control-plane 
	I1010 11:40:16.478956   13085 kubeadm.go:310] 
	I1010 11:40:16.479038   13085 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 11:40:16.479043   13085 kubeadm.go:310] 
	I1010 11:40:16.479109   13085 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8iuhps.egjej8sdpgu4s4u9 \
	I1010 11:40:16.479167   13085 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79bae548dd16cfe936348b462da3f6d7ee9037c9333f736d9e5d628396c7b6e1 
	I1010 11:40:16.479221   13085 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 11:40:16.479230   13085 cni.go:84] Creating CNI manager for ""
	I1010 11:40:16.479238   13085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:40:16.483134   13085 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 11:40:16.490197   13085 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 11:40:16.493849   13085 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
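The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation, a representative bridge conflist of the kind used with the docker runtime looks like the sketch below; the subnet, plugin options, and network name here are illustrative assumptions, not the actual file contents:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF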
	I1010 11:40:16.503893   13085 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 11:40:16.504012   13085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-704000 minikube.k8s.io/updated_at=2024_10_10T11_40_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=running-upgrade-704000 minikube.k8s.io/primary=true
	I1010 11:40:16.504050   13085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 11:40:16.507841   13085 ops.go:34] apiserver oom_adj: -16
	I1010 11:40:16.557574   13085 kubeadm.go:1113] duration metric: took 53.658042ms to wait for elevateKubeSystemPrivileges
	I1010 11:40:16.557899   13085 kubeadm.go:394] duration metric: took 4m12.556757333s to StartCluster
	I1010 11:40:16.557913   13085 settings.go:142] acquiring lock: {Name:mkc38780b398d6ae1b1dc4b65b91e70a285222f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:40:16.558092   13085 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:40:16.558519   13085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/kubeconfig: {Name:mk76f18909e94718c05e51991d3ea4660849ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:40:16.558705   13085 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:40:16.558750   13085 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 11:40:16.558788   13085 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-704000"
	I1010 11:40:16.558797   13085 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-704000"
	W1010 11:40:16.558800   13085 addons.go:243] addon storage-provisioner should already be in state true
	I1010 11:40:16.558831   13085 host.go:66] Checking if "running-upgrade-704000" exists ...
	I1010 11:40:16.558811   13085 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-704000"
	I1010 11:40:16.558854   13085 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-704000"
	I1010 11:40:16.558914   13085 config.go:182] Loaded profile config "running-upgrade-704000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:40:16.560089   13085 kapi.go:59] client config for running-upgrade-704000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/running-upgrade-704000/client.key", CAFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10202aa80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1010 11:40:16.560572   13085 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-704000"
	W1010 11:40:16.560578   13085 addons.go:243] addon default-storageclass should already be in state true
	I1010 11:40:16.560588   13085 host.go:66] Checking if "running-upgrade-704000" exists ...
	I1010 11:40:16.563135   13085 out.go:177] * Verifying Kubernetes components...
	I1010 11:40:16.563515   13085 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 11:40:16.569358   13085 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 11:40:16.569375   13085 sshutil.go:53] new ssh client: &{IP:localhost Port:53317 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/running-upgrade-704000/id_rsa Username:docker}
	I1010 11:40:16.573073   13085 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:40:16.577163   13085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:40:16.583134   13085 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 11:40:16.583146   13085 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 11:40:16.583157   13085 sshutil.go:53] new ssh client: &{IP:localhost Port:53317 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/running-upgrade-704000/id_rsa Username:docker}
	I1010 11:40:16.664903   13085 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 11:40:16.670673   13085 api_server.go:52] waiting for apiserver process to appear ...
	I1010 11:40:16.670726   13085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:40:16.674804   13085 api_server.go:72] duration metric: took 116.087667ms to wait for apiserver process to appear ...
	I1010 11:40:16.674813   13085 api_server.go:88] waiting for apiserver healthz status ...
	I1010 11:40:16.674820   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:16.705430   13085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 11:40:16.718616   13085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 11:40:17.040623   13085 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1010 11:40:17.040635   13085 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1010 11:40:13.759105   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:13.759135   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:21.676750   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:21.676799   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:18.760778   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:18.760823   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:26.677256   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:26.677291   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:23.763089   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:23.763122   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:31.677590   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:31.677632   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:28.765410   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:28.765612   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:40:28.777714   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:40:28.777796   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:40:28.790764   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:40:28.790838   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:40:28.801729   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:40:28.801809   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:40:28.812634   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:40:28.812727   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:40:28.823259   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:40:28.823332   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:40:28.833390   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:40:28.833470   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:40:28.844025   13221 logs.go:282] 0 containers: []
	W1010 11:40:28.844037   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:40:28.844114   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:40:28.855668   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:40:28.855688   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:40:28.855701   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:40:28.871277   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:40:28.871289   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:40:28.882928   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:40:28.882939   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:40:28.894679   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:40:28.894689   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:40:28.919405   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:40:28.919422   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:40:28.933214   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:40:28.933224   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:40:28.947588   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:40:28.947599   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:40:28.959711   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:40:28.959722   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:40:29.067960   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:40:29.067974   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:40:29.082875   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:40:29.082887   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:40:29.124517   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:40:29.124530   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:40:29.141826   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:40:29.141836   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:40:29.160686   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:40:29.160707   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:40:29.199677   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:40:29.199691   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:40:29.204267   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:40:29.204274   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:40:29.215463   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:40:29.215472   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:40:29.226993   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:40:29.227004   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:40:31.743905   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:36.678102   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:36.678131   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:36.746079   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:36.746229   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:40:36.757788   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:40:36.757879   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:40:36.768623   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:40:36.768717   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:40:36.780232   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:40:36.780304   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:40:36.790788   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:40:36.790868   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:40:36.801070   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:40:36.801146   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:40:36.811327   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:40:36.811401   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:40:36.821250   13221 logs.go:282] 0 containers: []
	W1010 11:40:36.821263   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:40:36.821333   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:40:36.832193   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:40:36.832212   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:40:36.832217   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:40:36.843879   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:40:36.843890   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:40:36.869133   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:40:36.869141   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:40:36.881412   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:40:36.881423   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:40:36.895182   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:40:36.895193   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:40:36.907016   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:40:36.907029   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:40:36.920365   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:40:36.920377   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:40:36.955840   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:40:36.955851   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:40:36.969871   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:40:36.969886   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:40:36.983744   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:40:36.983758   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:40:37.001228   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:40:37.001237   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:40:37.040734   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:40:37.040744   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:40:37.044935   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:40:37.044943   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:40:37.056346   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:40:37.056361   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:40:37.067823   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:40:37.067842   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:40:37.106068   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:40:37.106078   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:40:37.117564   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:40:37.117574   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:40:41.678725   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:41.678776   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:39.634707   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:46.679820   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:46.679847   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1010 11:40:47.042715   13085 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1010 11:40:47.046962   13085 out.go:177] * Enabled addons: storage-provisioner
	I1010 11:40:47.054883   13085 addons.go:510] duration metric: took 30.496448084s for enable addons: enabled=[storage-provisioner]
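	Two minikube processes (PIDs 13085 and 13221) are polling the same apiserver in parallel here, which is why the timestamps appear to jump backward. Each probe follows one pattern: api_server.go:253 logs "Checking apiserver healthz", and roughly five seconds later api_server.go:269 logs "stopped" with a client timeout; the 'default-storageclass' error above shares the same root cause, since nothing ever answers on 10.0.2.15:8443. A minimal Go sketch of that poll-and-timeout loop, assuming only the endpoint and the ~5s per-probe timeout read off the log (the function is illustrative, not minikube's actual code):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz mirrors the checking/stopped cycle in the log above:
// probe /healthz with a short per-request timeout, retry until an
// overall deadline expires. Illustrative sketch, not minikube source.
func pollHealthz(url string, perProbe, overall time.Duration) error {
	client := &http.Client{
		Timeout: perProbe, // ~5s separates "Checking" from "stopped" in the log
		Transport: &http.Transport{
			// the apiserver in the test VM presents a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err != nil {
			// the branch every probe in this report takes
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil // apiserver answered: healthy
		}
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	_ = pollHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 4*time.Minute)
}
```

	With the apiserver unreachable, every probe exits through the error branch, which matches the unbroken run of "stopped" lines throughout this report.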
	I1010 11:40:44.637065   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:44.637314   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:40:44.660548   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:40:44.660643   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:40:44.675894   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:40:44.675989   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:40:44.688246   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:40:44.688333   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:40:44.702887   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:40:44.702970   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:40:44.713551   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:40:44.713627   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:40:44.723995   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:40:44.724070   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:40:44.734449   13221 logs.go:282] 0 containers: []
	W1010 11:40:44.734461   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:40:44.734526   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:40:44.744945   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:40:44.744959   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:40:44.744964   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:40:44.785162   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:40:44.785174   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:40:44.802261   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:40:44.802274   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:40:44.816889   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:40:44.816898   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:40:44.831274   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:40:44.831287   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:40:44.856653   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:40:44.856664   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:40:44.894883   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:40:44.894895   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:40:44.909870   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:40:44.909884   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:40:44.921497   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:40:44.921508   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:40:44.932923   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:40:44.932933   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:40:44.969373   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:40:44.969384   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:40:44.981797   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:40:44.981810   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:40:44.986515   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:40:44.986520   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:40:44.998617   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:40:44.998628   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:40:45.014197   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:40:45.014211   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:40:45.031926   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:40:45.031935   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:40:45.044101   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:40:45.044115   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
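	Between health probes, each process runs the same diagnostic sweep: for every control-plane component it lists matching containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}} (logs.go:282 reports the count), then dumps the last 400 lines of each with docker logs --tail 400 <id>. A self-contained Go sketch of that sweep, with the docker invocations copied from the log and the wrapper itself only illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs reproduces the diagnostic pass seen in the log:
// find every container for a component by its k8s_ name prefix, then
// dump the last 400 lines of each. The docker invocations are copied
// from the report; the Go wrapper is illustrative only.
func gatherComponentLogs(component string) error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		// CombinedOutput captures the container's stderr as well as stdout
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
	}
	return nil
}

func main() {
	// the same component set the log iterates over
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Println("warning:", err)
		}
	}
}
```

	Note that the kindnet pass legitimately finds 0 containers in this cluster, which produces the recurring logs.go:284 warning rather than an error.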
	I1010 11:40:47.556094   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:51.680901   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:51.680951   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:52.558316   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:52.558515   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:40:52.569611   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:40:52.569683   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:40:52.580404   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:40:52.580484   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:40:56.682478   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:56.682527   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:52.590731   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:40:52.590798   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:40:52.601394   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:40:52.601468   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:40:52.612213   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:40:52.612291   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:40:52.628281   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:40:52.628353   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:40:52.638546   13221 logs.go:282] 0 containers: []
	W1010 11:40:52.638560   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:40:52.638624   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:40:52.649203   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:40:52.649223   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:40:52.649228   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:40:52.665054   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:40:52.665068   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:40:52.676373   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:40:52.676384   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:40:52.715777   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:40:52.715787   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:40:52.728865   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:40:52.728877   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:40:52.747837   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:40:52.747846   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:40:52.759543   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:40:52.759552   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:40:52.783436   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:40:52.783449   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:40:52.787587   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:40:52.787593   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:40:52.801481   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:40:52.801490   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:40:52.815959   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:40:52.815970   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:40:52.841335   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:40:52.841344   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:40:52.855382   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:40:52.855395   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:40:52.893819   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:40:52.893829   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:40:52.908235   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:40:52.908245   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:40:52.919606   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:40:52.919618   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:40:52.931491   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:40:52.931500   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:40:55.470970   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:01.684340   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:01.684391   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:00.473469   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:00.473797   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:00.498296   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:00.498427   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:00.514549   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:00.514674   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:00.528382   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:00.528462   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:00.539798   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:00.539882   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:00.550776   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:00.550855   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:00.561270   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:00.561345   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:00.571494   13221 logs.go:282] 0 containers: []
	W1010 11:41:00.571504   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:00.571569   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:00.587987   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:00.588007   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:00.588012   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:00.599686   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:00.599695   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:00.611615   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:00.611629   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:00.623136   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:00.623146   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:00.638889   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:00.638904   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:00.674778   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:00.674788   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:00.697642   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:00.697652   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:00.708408   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:00.708418   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:00.722162   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:00.722175   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:00.740121   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:00.740130   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:00.766491   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:00.766507   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:00.770901   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:00.770906   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:00.808188   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:00.808203   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:00.822182   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:00.822191   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:00.837044   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:00.837058   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:00.852319   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:00.852329   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:00.864086   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:00.864096   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:06.686650   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:06.686704   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:03.403592   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:11.686962   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:11.687013   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:08.405864   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:08.406050   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:08.419168   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:08.419256   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:08.430852   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:08.430929   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:08.441386   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:08.441471   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:08.452183   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:08.452268   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:08.462437   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:08.462515   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:08.472826   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:08.472909   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:08.483537   13221 logs.go:282] 0 containers: []
	W1010 11:41:08.483550   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:08.483618   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:08.494117   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:08.494134   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:08.494140   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:08.533823   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:08.533838   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:08.571922   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:08.571933   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:08.585788   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:08.585801   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:08.609499   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:08.609507   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:08.624444   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:08.624454   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:08.637992   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:08.638002   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:08.649008   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:08.649021   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:08.666556   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:08.666567   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:08.678478   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:08.678493   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:08.682648   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:08.682654   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:08.696507   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:08.696518   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:08.711684   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:08.711694   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:08.723656   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:08.723669   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:08.735214   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:08.735226   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:08.753302   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:08.753312   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:08.790906   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:08.790918   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:11.304489   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:16.689280   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:16.689476   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:16.707000   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:41:16.707082   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:16.721898   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:41:16.721984   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:16.739101   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:41:16.739183   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:16.751089   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:41:16.751172   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:16.762998   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:41:16.763083   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:16.773725   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:41:16.773800   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:16.784048   13085 logs.go:282] 0 containers: []
	W1010 11:41:16.784059   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:16.784146   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:16.794804   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:41:16.794820   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:41:16.794826   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:41:16.811178   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:16.811189   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:16.834787   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:41:16.834799   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:16.846786   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:16.846823   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:16.881530   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:16.881543   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:16.886507   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:16.886518   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:16.926554   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:41:16.926571   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:41:16.940468   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:41:16.940478   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:41:16.952549   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:41:16.952560   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:41:16.967233   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:41:16.967243   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:41:16.979409   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:41:16.979421   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:41:16.990940   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:41:16.990955   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:41:17.008791   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:41:17.008802   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:41:16.307124   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:16.307419   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:16.330843   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:16.330955   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:16.347219   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:16.347323   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:16.364984   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:16.365064   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:16.376943   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:16.377016   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:16.387544   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:16.387608   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:16.398780   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:16.398862   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:16.410048   13221 logs.go:282] 0 containers: []
	W1010 11:41:16.410059   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:16.410127   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:16.421292   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:16.421315   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:16.421321   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:16.434162   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:16.434174   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:16.478695   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:16.478705   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:16.490446   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:16.490458   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:16.513859   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:16.513868   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:16.563272   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:16.563286   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:16.577337   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:16.577347   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:16.588839   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:16.588849   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:16.603192   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:16.603204   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:16.614822   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:16.614832   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:16.633658   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:16.633668   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:16.645012   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:16.645024   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:16.686432   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:16.686458   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:16.691645   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:16.691657   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:16.707365   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:16.707377   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:16.730466   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:16.730480   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:16.745314   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:16.745325   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:19.522748   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:19.260864   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:24.525244   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:24.525376   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:24.537970   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:41:24.538050   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:24.551769   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:41:24.551847   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:24.562731   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:41:24.562804   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:24.573258   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:41:24.573339   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:24.585853   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:41:24.585931   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:24.597906   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:41:24.597992   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:24.610097   13085 logs.go:282] 0 containers: []
	W1010 11:41:24.610107   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:24.610177   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:24.621268   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:41:24.621283   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:41:24.621289   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:41:24.636027   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:41:24.636043   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:41:24.649438   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:41:24.649449   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:41:24.662287   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:41:24.662298   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:41:24.680907   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:41:24.680918   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:41:24.697065   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:41:24.697076   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:41:24.708691   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:41:24.708701   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:41:24.719867   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:24.719877   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:24.743122   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:24.743128   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:24.777129   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:24.777144   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:24.783419   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:24.783427   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:24.820662   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:41:24.820672   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:41:24.835017   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:41:24.835027   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
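	The host-side gatherers all run through /bin/bash -c, exactly as the ssh_runner.go:195 lines show. The container-status command is worth unpacking: "which crictl || echo crictl" substitutes the literal word crictl when the binary is missing, so the first command fails cleanly and the trailing "|| sudo docker ps -a" fallback still yields a container list. A sketch of the same invocations, assuming the command strings verbatim from the log (the Go wrapper is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runDiag mirrors how the report's ssh_runner lines execute each
// diagnostic: one /bin/bash -c invocation per command string.
// The command strings are copied from the log; the wrapper is a sketch.
func runDiag(cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("diag %q failed: %v\n", cmd, err)
	}
	fmt.Printf("--- %s ---\n%s", cmd, out)
}

func main() {
	// container status: prefer crictl when present, fall back to docker ps
	runDiag("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	// host-side context: service journals and kernel warnings
	runDiag("sudo journalctl -u kubelet -n 400")
	runDiag("sudo journalctl -u docker -u cri-docker -n 400")
	runDiag("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
```

	Bundling each command into a single bash -c string keeps one SSH round trip per diagnostic, which is why pipes and fallbacks are embedded in the command text rather than composed on the client side.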
	I1010 11:41:27.348033   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:24.263247   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:24.263417   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:24.278257   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:24.278356   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:24.289982   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:24.290067   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:24.300961   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:24.301039   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:24.311473   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:24.311554   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:24.326021   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:24.326105   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:24.336379   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:24.336457   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:24.346660   13221 logs.go:282] 0 containers: []
	W1010 11:41:24.346670   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:24.346746   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:24.356965   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:24.356980   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:24.356985   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:24.373766   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:24.373775   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:24.388146   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:24.388156   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:24.399280   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:24.399289   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:24.416870   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:24.416883   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:24.428964   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:24.428980   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:24.440937   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:24.440949   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:24.480620   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:24.480630   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:24.494163   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:24.494177   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:24.508717   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:24.508727   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:24.520889   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:24.520901   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:24.533631   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:24.533643   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:24.538466   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:24.538476   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:24.587021   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:24.587029   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:24.629215   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:24.629228   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:24.642674   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:24.642689   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:24.664417   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:24.664426   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:27.191407   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:32.349722   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:32.349840   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:32.363231   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:41:32.363316   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:32.374795   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:41:32.374874   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:32.386214   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:41:32.386296   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:32.397414   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:41:32.397488   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:32.408724   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:41:32.408808   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:32.193650   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:32.193882   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:32.210675   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:32.210779   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:32.223819   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:32.223901   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:32.234720   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:32.234799   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:32.245392   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:32.245472   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:32.263208   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:32.263290   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:32.274161   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:32.274245   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:32.283915   13221 logs.go:282] 0 containers: []
	W1010 11:41:32.283926   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:32.283993   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:32.294457   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:32.294473   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:32.294478   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:32.317362   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:32.317369   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:32.351743   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:32.351754   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:32.368506   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:32.368522   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:32.380951   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:32.380965   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:32.396953   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:32.396966   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:32.409815   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:32.409826   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:32.425398   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:32.425409   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:32.440107   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:32.440117   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:32.453259   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:32.453271   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:32.494873   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:32.494888   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:32.499423   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:32.499431   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:32.514205   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:32.514217   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:32.529538   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:32.529553   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:32.546675   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:32.546689   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:32.420548   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:41:32.420632   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:32.433950   13085 logs.go:282] 0 containers: []
	W1010 11:41:32.433964   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:32.434034   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:32.444779   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:41:32.444796   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:41:32.444802   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:32.457208   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:32.457219   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:32.462521   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:41:32.462530   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:41:32.477512   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:41:32.477523   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:41:32.492219   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:41:32.492230   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:41:32.505024   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:41:32.505037   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:41:32.519885   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:41:32.519897   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:41:32.533360   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:41:32.533371   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:41:32.553326   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:32.553339   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:32.590710   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:32.590719   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:32.629131   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:41:32.629141   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:41:32.647178   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:41:32.647189   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:41:32.667439   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:32.667449   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:35.193751   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:32.587094   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:32.587108   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:32.606285   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:32.606297   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:35.120798   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:40.195723   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:40.195800   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:40.208018   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:41:40.208096   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:40.219159   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:41:40.219237   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:40.230212   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:41:40.230290   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:40.241896   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:41:40.241975   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:40.253991   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:41:40.254073   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:40.265883   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:41:40.265964   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:40.276818   13085 logs.go:282] 0 containers: []
	W1010 11:41:40.276829   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:40.276896   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:40.288453   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:41:40.288471   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:41:40.288477   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:41:40.305556   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:41:40.305567   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:41:40.319908   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:41:40.319919   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:41:40.331754   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:41:40.331767   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:41:40.347668   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:41:40.347678   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:41:40.366308   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:40.366320   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:40.391418   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:41:40.391431   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:40.403477   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:40.403488   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:40.439244   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:40.439258   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:40.477519   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:41:40.477530   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:41:40.492792   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:41:40.492804   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:41:40.505599   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:41:40.505612   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:41:40.530214   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:40.530225   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:40.123155   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:40.123404   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:40.139619   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:40.139731   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:40.152308   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:40.152396   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:40.163704   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:40.163783   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:40.176235   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:40.176313   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:40.190690   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:40.190769   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:40.201592   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:40.201668   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:40.212990   13221 logs.go:282] 0 containers: []
	W1010 11:41:40.213003   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:40.213072   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:40.224732   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:40.224755   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:40.224761   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:40.229174   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:40.229185   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:40.245202   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:40.245215   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:40.258284   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:40.258295   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:40.276489   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:40.276507   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:40.289608   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:40.289618   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:40.305987   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:40.305996   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:40.347565   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:40.347578   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:40.384993   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:40.385004   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:40.430487   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:40.430498   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:40.445186   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:40.445197   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:40.461705   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:40.461717   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:40.474617   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:40.474630   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:40.499890   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:40.499912   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:40.512926   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:40.512945   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:40.528606   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:40.528616   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:40.543090   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:40.543100   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:43.037083   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:43.057105   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:48.039356   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:48.039696   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:48.062277   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:41:48.062381   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:48.078729   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:41:48.078819   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:48.092574   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:41:48.092659   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:48.104237   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:41:48.104314   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:48.115623   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:41:48.115703   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:48.127280   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:41:48.127357   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:48.139534   13085 logs.go:282] 0 containers: []
	W1010 11:41:48.139542   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:48.139579   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:48.150912   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:41:48.150923   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:41:48.150928   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:41:48.167667   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:41:48.167682   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:41:48.180601   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:41:48.180613   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:41:48.196334   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:41:48.196342   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:48.209227   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:48.209239   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:48.248177   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:48.248191   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:48.253339   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:41:48.253347   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:41:48.268564   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:41:48.268573   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:41:48.291863   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:41:48.291874   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:41:48.304579   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:41:48.304591   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:41:48.323553   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:41:48.323566   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:41:48.336691   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:48.336703   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:48.363068   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:48.363081   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:50.902699   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:48.059214   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:48.059365   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:48.079183   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:48.079234   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:48.094191   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:48.094258   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:48.105885   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:48.105953   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:48.117102   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:48.117186   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:48.139164   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:48.139245   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:48.150675   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:48.150757   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:48.162102   13221 logs.go:282] 0 containers: []
	W1010 11:41:48.162114   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:48.162186   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:48.176915   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:48.176934   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:48.176939   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:48.193969   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:48.193979   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:48.206931   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:48.206945   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:48.225564   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:48.225578   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:48.238961   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:48.238980   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:48.251206   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:48.251219   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:48.264482   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:48.264496   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:48.306707   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:48.306717   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:48.344617   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:48.344629   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:48.358714   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:48.358726   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:48.386068   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:48.386081   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:48.401067   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:48.401079   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:48.415781   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:48.415795   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:48.427584   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:48.427596   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:48.446255   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:48.446265   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:48.450512   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:48.450519   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:48.488115   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:48.488125   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:51.003674   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:55.905117   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:55.905605   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:55.936465   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:41:55.936617   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:55.955473   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:41:55.955581   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:55.970030   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:41:55.970116   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:55.981858   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:41:55.981938   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:55.993719   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:41:55.993797   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:56.004290   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:41:56.004357   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:56.015535   13085 logs.go:282] 0 containers: []
	W1010 11:41:56.015569   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:56.015638   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:56.027574   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:41:56.027590   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:41:56.027596   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:41:56.040661   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:41:56.040673   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:41:56.053518   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:41:56.053532   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:41:56.066498   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:41:56.066508   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:56.078859   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:56.078869   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:56.083791   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:41:56.083799   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:41:56.099416   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:41:56.099432   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:41:56.114742   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:41:56.114755   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:41:56.132782   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:41:56.132793   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:41:56.151236   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:56.151248   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:56.177829   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:56.177847   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:56.214041   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:56.214054   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:56.251964   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:41:56.251975   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:41:56.004247   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:56.004357   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:56.020999   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:56.021077   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:56.032278   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:56.032353   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:56.044358   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:56.044436   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:56.055789   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:56.055869   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:56.070689   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:56.070769   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:56.082282   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:56.082382   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:56.093694   13221 logs.go:282] 0 containers: []
	W1010 11:41:56.093705   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:56.093774   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:56.105947   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:56.105965   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:56.105970   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:56.145920   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:56.145933   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:56.163973   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:56.163990   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:56.176114   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:56.176126   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:56.195844   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:56.195857   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:56.200224   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:56.200231   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:56.214980   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:56.214988   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:56.255713   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:56.255727   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:56.268093   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:56.268103   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:56.279806   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:56.279816   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:56.297529   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:56.297542   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:56.333572   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:56.333586   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:56.345468   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:56.345478   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:56.370729   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:56.370741   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:56.385523   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:56.385536   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:56.400576   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:56.400588   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:56.414695   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:56.414705   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:58.769967   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:58.928554   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:03.772207   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:03.772465   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:03.793930   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:03.794041   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:03.808425   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:03.808516   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:03.820567   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:42:03.820640   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:03.830984   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:03.831061   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:03.841570   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:03.841659   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:03.852463   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:03.852546   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:03.863565   13085 logs.go:282] 0 containers: []
	W1010 11:42:03.863578   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:03.863641   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:03.874134   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:03.874148   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:03.874155   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:03.885221   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:03.885232   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:03.919173   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:03.919183   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:03.923579   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:03.923586   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:03.938262   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:03.938278   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:03.951004   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:03.951016   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:03.963332   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:03.963346   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:03.976165   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:03.976177   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:04.001659   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:04.001674   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:04.039698   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:04.039710   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:04.056432   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:04.056446   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:04.079298   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:04.079306   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:04.098272   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:04.098284   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:06.623664   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:03.930731   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:03.930840   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:03.942655   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:03.942762   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:03.954199   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:03.954280   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:03.965962   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:03.966047   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:03.977329   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:03.977409   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:03.991072   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:03.991151   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:04.002558   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:04.002641   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:04.013681   13221 logs.go:282] 0 containers: []
	W1010 11:42:04.013692   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:04.013762   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:04.025769   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:04.025789   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:04.025794   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:04.064765   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:04.064781   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:04.077331   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:04.077343   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:04.117054   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:04.117069   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:04.132191   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:04.132205   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:04.144696   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:04.144708   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:04.157122   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:04.157137   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:04.176454   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:04.176469   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:04.181232   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:04.181239   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:04.198044   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:04.198060   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:04.239593   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:04.239605   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:04.254294   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:04.254304   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:04.265916   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:04.265928   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:04.278158   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:04.278175   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:04.289366   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:04.289381   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:04.313778   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:04.313786   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:04.328582   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:04.328593   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:06.848864   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:11.625877   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:11.626129   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:11.645798   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:11.645899   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:11.659946   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:11.660031   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:11.672104   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:42:11.672186   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:11.683048   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:11.683122   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:11.693972   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:11.694041   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:11.704950   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:11.705028   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:11.715304   13085 logs.go:282] 0 containers: []
	W1010 11:42:11.715317   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:11.715383   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:11.731875   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:11.731892   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:11.731898   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:11.736830   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:11.736836   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:11.774585   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:11.774596   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:11.788189   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:11.788202   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:11.809569   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:11.809580   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:11.821264   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:11.821277   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:11.845151   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:11.845161   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:11.880609   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:11.880630   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:11.896397   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:11.896406   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:11.909172   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:11.909190   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:11.921490   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:11.921502   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:11.933774   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:11.933785   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:11.952165   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:11.952175   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:11.849207   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:11.849305   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:11.861052   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:11.861138   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:11.872039   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:11.872116   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:11.883021   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:11.883093   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:11.894294   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:11.894377   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:11.905816   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:11.905895   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:11.917355   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:11.917431   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:11.935377   13221 logs.go:282] 0 containers: []
	W1010 11:42:11.935386   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:11.935454   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:11.948654   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:11.948672   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:11.948677   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:11.986931   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:11.986947   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:11.991302   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:11.991308   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:12.028820   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:12.028830   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:12.042443   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:12.042453   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:12.057978   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:12.057989   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:12.070271   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:12.070281   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:12.082135   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:12.082145   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:12.120549   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:12.120562   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:12.133089   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:12.133102   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:12.147519   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:12.147529   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:12.161899   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:12.161908   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:12.173384   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:12.173396   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:12.197822   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:12.197835   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:12.213107   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:12.213117   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:12.230338   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:12.230349   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:12.244276   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:12.244285   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:14.466878   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:14.757438   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:19.469263   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:19.469590   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:19.496055   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:19.496198   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:19.514393   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:19.514498   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:19.529532   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:42:19.529615   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:19.541748   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:19.541830   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:19.552478   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:19.552557   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:19.563137   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:19.563206   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:19.573481   13085 logs.go:282] 0 containers: []
	W1010 11:42:19.573490   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:19.573552   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:19.583894   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
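
Each diagnostic pass first enumerates the control-plane containers one component at a time with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, matching the k8s_ name prefix that cri-dockerd gives kubelet-managed containers. The kindnet query returning zero containers (and the W-level line that follows) is expected here, since this cluster's networking is not kindnet. A sketch of that enumeration loop, assuming direct access to the docker CLI rather than minikube's SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers, running or exited,
    // whose names match k8s_<component>.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
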
	I1010 11:42:19.583910   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:19.583917   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:19.595494   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:19.595504   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:19.612993   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:19.613004   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:19.629132   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:19.629141   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:19.652700   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:19.652710   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:19.688972   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:19.688986   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:19.702028   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:19.702041   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:19.716412   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:19.716423   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:19.730048   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:19.730057   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:19.741583   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:19.741597   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:19.753974   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:19.753986   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:19.789940   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:19.789954   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:19.795298   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:19.795311   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:22.313061   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:19.758494   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
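
Note the timestamps stepping backwards here (11:42:22 for pid 13085, then 11:42:19 for pid 13221): two minikube processes from parallel tests write to the same log, each running its own probe-and-gather loop independently. Schematically, each loop looks like the sketch below; the function names are hypothetical stand-ins, not minikube's real identifiers.

    package main

    import (
        "fmt"
        "time"
    )

    // Stand-ins for the two activities interleaved above. Hypothetical
    // names; the real work happens in api_server.go and logs.go.
    func checkHealthz() error { return fmt.Errorf("context deadline exceeded") }
    func gatherDiagnostics()  { fmt.Println("gathering logs ...") }

    func main() {
        // Probe the apiserver; on every timeout, dump diagnostics before
        // probing again, until it answers or an overall deadline expires.
        deadline := time.Now().Add(30 * time.Second) // illustrative bound
        for time.Now().Before(deadline) {
            if err := checkHealthz(); err == nil {
                fmt.Println("apiserver healthy")
                return
            }
            gatherDiagnostics()
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for apiserver")
    }
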
	I1010 11:42:19.758586   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:19.769935   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:19.770015   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:19.781282   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:19.781361   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:19.792783   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:19.792878   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:19.804428   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:19.804510   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:19.815899   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:19.815968   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:19.828077   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:19.828155   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:19.838631   13221 logs.go:282] 0 containers: []
	W1010 11:42:19.838643   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:19.838702   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:19.849491   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:19.849508   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:19.849514   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:19.864556   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:19.864566   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:19.878742   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:19.878752   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:19.889655   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:19.889667   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:19.913620   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:19.913628   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:19.952456   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:19.952470   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:19.967412   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:19.967421   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:19.981777   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:19.981787   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:19.997843   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:19.997855   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:20.010097   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:20.010112   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:20.014656   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:20.014663   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:20.057509   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:20.057523   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:20.069365   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:20.069381   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:20.086559   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:20.086572   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:20.123753   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:20.123767   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:20.138923   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:20.138937   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:20.153635   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:20.153646   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:27.314395   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:27.314516   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:27.325869   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:27.325952   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:27.335791   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:27.335868   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:27.346071   13085 logs.go:282] 2 containers: [28cfc4235f98 f111889abf6e]
	I1010 11:42:27.346137   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:27.356581   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:27.356649   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:27.367589   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:27.367667   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:27.378521   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:27.378593   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:27.389215   13085 logs.go:282] 0 containers: []
	W1010 11:42:27.389229   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:27.389301   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:27.399518   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:27.399537   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:27.399543   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:22.667674   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:27.410970   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:27.410982   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:27.429054   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:27.429065   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:27.440364   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:27.440375   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:27.463539   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:27.463548   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:27.498690   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:27.498697   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:27.512742   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:27.512756   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:27.528689   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:27.528700   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:27.546223   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:27.546234   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:27.558419   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:27.558432   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:27.563704   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:27.563711   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:27.630943   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:27.630955   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:27.647352   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:27.647367   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
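
Every gathering pass draws on the same fixed set of sources: docker logs --tail 400 for each container found, journalctl for the kubelet and docker/cri-docker units, a dmesg restricted to warning level and above, kubectl describe nodes run inside the guest with its own kubeconfig, and a container-status listing whose backtick substitution (which crictl || echo crictl) falls back to docker ps -a when crictl is absent. Collected as a Go table for reference; each command string is copied verbatim from the log.

    package main

    import "fmt"

    // logSources mirrors the commands the gathering passes run over SSH.
    var logSources = map[string]string{
        "kubelet":        `sudo journalctl -u kubelet -n 400`,
        "Docker":         `sudo journalctl -u docker -u cri-docker -n 400`,
        "dmesg":          `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
        "describe nodes": `sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
        // Prefers crictl, falls back to docker if crictl is not installed.
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        // Plus, for each container ID found above: docker logs --tail 400 <id>
    }

    func main() {
        for name, cmd := range logSources {
            fmt.Printf("%-16s %s\n", name, cmd)
        }
    }
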
	I1010 11:42:30.179650   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:27.669844   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:27.670053   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:27.683262   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:27.683352   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:27.696720   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:27.696815   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:27.708678   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:27.708784   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:27.721412   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:27.721501   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:27.733100   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:27.733182   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:27.744152   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:27.744240   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:27.755070   13221 logs.go:282] 0 containers: []
	W1010 11:42:27.755080   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:27.755149   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:27.767764   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:27.767781   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:27.767787   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:27.783108   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:27.783123   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:27.794881   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:27.794896   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:27.806644   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:27.806658   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:27.821232   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:27.821246   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:27.832857   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:27.832867   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:27.846290   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:27.846302   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:27.868675   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:27.868681   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:27.904603   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:27.904617   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:27.909350   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:27.909355   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:27.923013   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:27.923027   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:27.963108   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:27.963123   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:27.975289   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:27.975304   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:27.986204   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:27.986214   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:27.998426   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:27.998434   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:28.037772   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:28.037782   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:28.055099   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:28.055110   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:30.572627   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:35.181193   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:35.181679   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:35.221573   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:35.221757   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:35.244588   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:35.244692   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:35.260354   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:42:35.260450   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:35.277489   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:35.277575   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:35.292485   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:35.292579   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:35.304853   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:35.304950   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:35.315224   13085 logs.go:282] 0 containers: []
	W1010 11:42:35.315235   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:35.315291   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:35.326202   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
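
One change worth noticing in this pass: the coredns query now returns four container IDs where the earlier passes (11:42:19, 11:42:27) returned two. Because the listing uses ps -a, exited containers remain visible, so the growth from [28cfc4235f98 f111889abf6e] to four IDs most likely reflects coredns containers being recreated while the apiserver stays unreachable; the log alone does not show which of the four are still running.
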
	I1010 11:42:35.326222   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:35.326228   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:35.345830   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:42:35.345839   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:42:35.357649   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:35.357661   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:35.373440   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:35.373451   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:35.395815   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:35.395825   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:35.400287   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:35.400293   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:35.411717   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:35.411726   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:35.423306   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:35.423316   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:35.434987   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:35.434997   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:35.470615   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:35.470625   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:35.484934   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:42:35.484944   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:42:35.498961   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:35.498971   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:35.510923   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:35.510932   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:35.524437   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:35.524449   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:35.550959   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:35.550968   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:35.574870   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:35.574982   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:35.586943   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:35.587026   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:35.597835   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:35.597927   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:35.608553   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:35.608629   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:35.622051   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:35.622136   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:35.632730   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:35.632799   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:35.644023   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:35.644101   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:35.653819   13221 logs.go:282] 0 containers: []
	W1010 11:42:35.653834   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:35.653901   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:35.664354   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:35.664371   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:35.664377   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:35.676541   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:35.676552   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:35.716129   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:35.716139   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:35.731377   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:35.731388   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:35.747004   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:35.747015   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:35.760893   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:35.760906   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:35.784831   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:35.784839   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:35.822152   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:35.822163   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:35.836340   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:35.836349   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:35.848048   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:35.848061   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:35.859652   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:35.859663   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:35.864227   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:35.864234   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:35.900944   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:35.900954   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:35.915026   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:35.915036   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:35.928740   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:35.928753   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:35.941065   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:35.941074   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:35.959979   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:35.959989   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:38.090025   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:38.477622   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:43.092433   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:43.092702   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:43.114924   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:43.115028   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:43.130618   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:43.130713   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:43.143313   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:42:43.143391   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:43.154640   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:43.154713   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:43.169864   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:43.169939   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:43.180932   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:43.181015   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:43.191209   13085 logs.go:282] 0 containers: []
	W1010 11:42:43.191219   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:43.191283   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:43.202276   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:43.202293   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:43.202299   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:43.220141   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:42:43.220151   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:42:43.232132   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:43.232142   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:43.246139   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:43.246150   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:43.261334   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:43.261347   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:43.280716   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:43.280726   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:43.314790   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:43.314804   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:43.368738   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:42:43.368750   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:42:43.380602   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:43.380614   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:43.392871   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:43.392881   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:43.397646   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:43.397652   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:43.409403   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:43.409413   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:43.431493   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:43.431506   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:43.444023   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:43.444036   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:43.469667   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:43.469678   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:45.985455   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:43.479812   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:43.479945   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:43.490947   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:43.491031   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:43.505505   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:43.505583   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:43.522773   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:43.522853   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:43.533833   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:43.533923   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:43.544232   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:43.544312   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:43.555180   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:43.555259   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:43.565258   13221 logs.go:282] 0 containers: []
	W1010 11:42:43.565273   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:43.565340   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:43.582276   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:43.582299   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:43.582304   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:43.620708   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:43.620720   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:43.634635   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:43.634644   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:43.646578   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:43.646591   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:43.650826   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:43.650832   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:43.664787   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:43.664797   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:43.679305   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:43.679314   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:43.690331   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:43.690344   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:43.727067   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:43.727077   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:43.767201   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:43.767215   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:43.778524   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:43.778534   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:43.795809   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:43.795823   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:43.819599   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:43.819606   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:43.832529   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:43.832540   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:43.847203   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:43.847213   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:43.862452   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:43.862461   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:43.879194   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:43.879204   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:46.393764   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:50.988184   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:50.988727   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:51.027864   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:51.028027   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:51.050437   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:51.050556   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:51.065264   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:42:51.065356   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:51.080719   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:51.080793   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:51.091684   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:51.091763   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:51.102307   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:51.102390   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:51.112944   13085 logs.go:282] 0 containers: []
	W1010 11:42:51.112957   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:51.113027   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:51.124383   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:51.124401   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:51.124406   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:51.129039   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:42:51.129045   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:42:51.140557   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:51.140568   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:51.156228   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:51.156241   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:51.167216   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:51.167227   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:51.192303   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:51.192311   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:51.227770   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:42:51.227779   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:42:51.240776   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:51.240787   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:51.252856   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:51.252870   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:51.268974   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:51.268985   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:51.281607   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:51.281617   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:51.293410   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:51.293419   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:51.329676   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:51.329686   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:51.344136   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:51.344147   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:51.358218   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:51.358229   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:42:51.395982   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:51.396088   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:51.419272   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:51.419353   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:51.437581   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:51.437664   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:51.448686   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:51.448771   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:51.459898   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:51.459980   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:51.470211   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:51.470286   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:51.481675   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:51.481766   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:51.491741   13221 logs.go:282] 0 containers: []
	W1010 11:42:51.491755   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:51.491814   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:51.502353   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:51.502371   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:51.502376   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:51.515042   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:51.515053   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:51.532981   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:51.532991   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:51.546605   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:51.546615   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:51.559012   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:51.559023   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:51.596415   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:51.596423   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:51.610514   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:51.610527   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:51.624151   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:51.624163   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:51.639470   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:51.639479   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:51.653782   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:51.653792   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:51.664898   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:51.664909   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:51.687510   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:51.687520   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:51.698867   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:51.698876   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:51.722296   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:51.722307   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:51.726652   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:51.726661   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:51.767542   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:51.767552   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:51.780605   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:51.780640   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:53.880732   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:54.317788   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:58.883090   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:58.883325   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:58.902353   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:42:58.902470   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:58.917211   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:42:58.917300   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:58.929517   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:42:58.929593   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:58.940163   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:42:58.940235   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:58.950390   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:42:58.950470   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:58.960970   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:42:58.961045   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:58.971424   13085 logs.go:282] 0 containers: []
	W1010 11:42:58.971441   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:58.971512   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:58.982436   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:42:58.982454   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:58.982461   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:59.019049   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:59.019062   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:59.058369   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:42:59.058381   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:42:59.071108   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:42:59.071123   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:42:59.083259   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:59.083275   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:59.106785   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:42:59.106793   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:42:59.121508   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:42:59.121521   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:42:59.135613   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:42:59.135626   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:42:59.147491   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:42:59.147501   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:42:59.164503   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:42:59.164513   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:42:59.175414   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:42:59.175427   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:59.186958   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:59.186975   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:59.191365   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:42:59.191372   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:42:59.205319   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:42:59.205329   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:42:59.216829   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:42:59.216839   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:01.736871   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:59.319987   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:59.320151   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:59.332196   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:59.332282   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:59.349989   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:59.350073   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:59.362522   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:59.362600   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:59.373975   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:59.374058   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:59.384871   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:59.384958   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:59.396356   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:59.396433   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:59.410538   13221 logs.go:282] 0 containers: []
	W1010 11:42:59.410550   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:59.410623   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:59.421817   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:59.421835   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:59.421840   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:59.436505   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:59.436515   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:59.476243   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:59.476253   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:59.487843   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:59.487854   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:59.500650   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:59.500660   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:59.512946   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:59.512960   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:59.536897   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:59.536910   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:59.576529   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:59.576538   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:59.580875   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:59.580883   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:59.592447   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:59.592459   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:59.605936   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:59.605950   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:59.617504   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:59.617515   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:59.631975   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:59.631989   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:59.644373   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:59.644383   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:59.659476   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:59.659490   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:59.678083   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:59.678098   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:59.712489   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:59.712503   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:43:02.228722   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:06.739132   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:06.739304   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:06.752456   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:06.752538   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:06.763435   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:06.763515   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:06.774213   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:06.774293   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:06.784523   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:06.784603   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:06.795149   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:06.795223   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:06.805192   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:06.805269   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:06.815955   13085 logs.go:282] 0 containers: []
	W1010 11:43:06.815965   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:06.816024   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:06.826809   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:06.826830   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:06.826836   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:06.862280   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:06.862288   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:06.877398   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:06.877408   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:06.891752   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:06.891762   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:06.903301   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:06.903312   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:06.922078   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:06.922088   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:06.933922   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:06.933933   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:06.948391   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:06.948400   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:06.960567   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:06.960577   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:06.965501   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:06.965509   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:07.001329   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:07.001341   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:07.013750   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:07.013764   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:07.025562   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:07.025572   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:07.038958   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:07.038969   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:07.056577   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:07.056591   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:07.231034   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:07.231162   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:07.244507   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:43:07.244598   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:07.257899   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:43:07.257972   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:07.268619   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:43:07.268697   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:07.279904   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:43:07.279987   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:07.290673   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:43:07.290758   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:07.305662   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:43:07.305737   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:07.316111   13221 logs.go:282] 0 containers: []
	W1010 11:43:07.316128   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:07.316194   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:07.326948   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:43:07.326965   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:43:07.326970   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:43:07.338253   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:43:07.338267   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:43:07.359269   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:43:07.359283   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:43:07.373489   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:07.373503   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:07.377525   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:43:07.377533   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:43:07.415973   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:43:07.415986   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:43:07.432723   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:43:07.432735   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:43:07.445827   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:43:07.445839   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:43:07.457891   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:43:07.457903   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:07.470261   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:07.470271   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:07.509269   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:43:07.509291   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:43:07.526854   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:07.526865   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:07.548938   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:07.548945   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:09.584611   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:07.584374   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:43:07.584384   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:43:07.602275   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:43:07.602285   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:43:07.616221   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:43:07.616231   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:43:07.635395   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:43:07.635405   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:43:10.148510   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:14.586905   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:14.587096   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:14.598962   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:14.599049   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:14.614011   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:14.614091   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:14.624571   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:14.624647   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:14.634890   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:14.634969   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:14.652015   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:14.652094   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:14.666717   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:14.666792   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:14.677266   13085 logs.go:282] 0 containers: []
	W1010 11:43:14.677277   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:14.677341   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:14.687846   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:14.687865   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:14.687886   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:14.699960   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:14.699971   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:14.711078   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:14.711089   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:14.736108   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:14.736118   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:14.750418   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:14.750430   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:14.763619   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:14.763634   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:14.775210   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:14.775224   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:14.790174   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:14.790184   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:14.801843   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:14.801854   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:14.825116   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:14.825127   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:14.837169   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:14.837179   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:14.871858   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:14.871872   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:14.876656   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:14.876664   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:14.900682   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:14.900693   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:14.912888   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:14.912902   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:15.150807   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:15.151025   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:15.165589   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:43:15.165689   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:15.177710   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:43:15.177794   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:15.188367   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:43:15.188444   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:15.199721   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:43:15.199807   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:15.214935   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:43:15.215005   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:15.225990   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:43:15.226061   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:15.236191   13221 logs.go:282] 0 containers: []
	W1010 11:43:15.236208   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:15.236276   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:15.247185   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:43:15.247200   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:43:15.247206   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:43:15.262224   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:43:15.262235   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:43:15.276368   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:15.276377   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:15.299732   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:43:15.299740   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:43:15.321294   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:43:15.321303   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:43:15.337795   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:15.337810   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:15.376671   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:15.376681   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:15.380887   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:15.380892   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:15.414663   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:43:15.414673   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:43:15.426814   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:43:15.426826   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:43:15.439507   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:43:15.439517   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:43:15.451873   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:43:15.451885   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:43:15.466540   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:43:15.466553   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:15.479046   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:43:15.479056   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:43:15.523101   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:43:15.523114   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:43:15.537565   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:43:15.537582   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:43:15.566041   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:43:15.566056   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:43:17.448643   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:18.086562   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:22.451208   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:22.451520   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:22.479308   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:22.479451   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:22.497004   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:22.497098   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:22.511117   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:22.511205   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:22.524967   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:22.525036   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:22.535608   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:22.535673   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:22.546152   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:22.546236   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:22.556371   13085 logs.go:282] 0 containers: []
	W1010 11:43:22.556384   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:22.556449   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:22.582613   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:22.582630   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:22.582637   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:22.587001   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:22.587008   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:22.601194   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:22.601205   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:22.615789   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:22.615800   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:22.627300   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:22.627311   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:22.638413   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:22.638423   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:22.672170   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:22.672177   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:22.684099   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:22.684112   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:22.709149   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:22.709157   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:22.720871   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:22.720885   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:22.733841   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:22.733855   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:22.749378   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:22.749392   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:22.761078   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:22.761092   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:22.773699   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:22.773710   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:22.791077   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:22.791091   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:25.327762   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:23.088838   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:23.089057   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:23.107141   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:43:23.107246   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:23.119885   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:43:23.119965   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:23.134622   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:43:23.134704   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:23.145193   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:43:23.145274   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:23.155666   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:43:23.155745   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:23.166374   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:43:23.166453   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:23.176064   13221 logs.go:282] 0 containers: []
	W1010 11:43:23.176079   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:23.176142   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:23.186804   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:43:23.186819   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:43:23.186825   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:23.199878   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:23.199888   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:23.234134   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:43:23.234149   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:43:23.253473   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:43:23.253484   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:43:23.274085   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:43:23.274097   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:43:23.298719   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:23.298735   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:23.322003   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:43:23.322013   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:43:23.339455   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:43:23.339469   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:43:23.350818   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:43:23.350827   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:43:23.363190   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:23.363199   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:23.400607   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:23.400623   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:23.404885   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:43:23.404891   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:43:23.472516   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:43:23.472528   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:43:23.488377   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:43:23.488386   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:43:23.502285   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:43:23.502294   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:43:23.513956   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:43:23.513968   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:43:23.531753   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:43:23.531763   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:43:26.047523   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:31.049819   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:31.049883   13221 kubeadm.go:597] duration metric: took 4m3.9252285s to restartPrimaryControlPlane
	W1010 11:43:31.049956   13221 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 11:43:31.049984   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1010 11:43:32.086614   13221 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.036629584s)
	I1010 11:43:32.086685   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 11:43:32.092003   13221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 11:43:32.094936   13221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 11:43:32.097596   13221 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 11:43:32.097602   13221 kubeadm.go:157] found existing configuration files:
	
	I1010 11:43:32.097635   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/admin.conf
	I1010 11:43:32.100097   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 11:43:32.100125   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 11:43:32.103619   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/kubelet.conf
	I1010 11:43:32.106795   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 11:43:32.106821   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 11:43:32.109858   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/controller-manager.conf
	I1010 11:43:32.112296   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 11:43:32.112324   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 11:43:32.115263   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/scheduler.conf
	I1010 11:43:32.118401   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 11:43:32.118426   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 11:43:32.121341   13221 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 11:43:32.139680   13221 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1010 11:43:32.139828   13221 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 11:43:32.190251   13221 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 11:43:32.190317   13221 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 11:43:32.190366   13221 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 11:43:32.239141   13221 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 11:43:30.330110   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:30.330290   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:30.341264   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:30.341353   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:30.352164   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:30.352243   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:30.366656   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:30.366739   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:30.377151   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:30.377225   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:30.387851   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:30.387929   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:30.406382   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:30.406463   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:30.416627   13085 logs.go:282] 0 containers: []
	W1010 11:43:30.416642   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:30.416707   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:30.427209   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:30.427228   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:30.427235   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:30.438224   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:30.438233   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:30.450305   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:30.450317   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:30.475179   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:30.475188   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:30.487148   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:30.487159   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:30.509132   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:30.509143   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:30.514573   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:30.514582   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:30.550238   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:30.550249   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:30.565911   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:30.565921   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:30.577770   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:30.577781   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:30.589405   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:30.589417   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:30.624317   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:30.624338   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:30.639266   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:30.639277   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:30.650860   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:30.650872   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:30.672298   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:30.672314   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:32.242257   13221 out.go:235]   - Generating certificates and keys ...
	I1010 11:43:32.242290   13221 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 11:43:32.242323   13221 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 11:43:32.242364   13221 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 11:43:32.242398   13221 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 11:43:32.242435   13221 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 11:43:32.242474   13221 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 11:43:32.242513   13221 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 11:43:32.242553   13221 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 11:43:32.242604   13221 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 11:43:32.242652   13221 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 11:43:32.242675   13221 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 11:43:32.242701   13221 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 11:43:32.326327   13221 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 11:43:32.445249   13221 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 11:43:32.537370   13221 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 11:43:32.592360   13221 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 11:43:32.623577   13221 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 11:43:32.623958   13221 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 11:43:32.624015   13221 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 11:43:32.715618   13221 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 11:43:33.185843   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:32.719596   13221 out.go:235]   - Booting up control plane ...
	I1010 11:43:32.719647   13221 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 11:43:32.719683   13221 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 11:43:32.719713   13221 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 11:43:32.719758   13221 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 11:43:32.719872   13221 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 11:43:37.218193   13221 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504063 seconds
	I1010 11:43:37.218285   13221 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 11:43:37.221843   13221 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 11:43:37.741350   13221 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 11:43:37.741638   13221 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-616000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 11:43:38.247856   13221 kubeadm.go:310] [bootstrap-token] Using token: 6se1ez.f9ly5chl6izab28p
	I1010 11:43:38.256798   13221 out.go:235]   - Configuring RBAC rules ...
	I1010 11:43:38.256879   13221 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 11:43:38.259912   13221 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 11:43:38.262708   13221 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 11:43:38.263845   13221 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 11:43:38.264992   13221 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 11:43:38.266130   13221 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 11:43:38.270153   13221 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 11:43:38.462179   13221 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 11:43:38.661378   13221 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 11:43:38.661946   13221 kubeadm.go:310] 
	I1010 11:43:38.661978   13221 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 11:43:38.662021   13221 kubeadm.go:310] 
	I1010 11:43:38.662079   13221 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 11:43:38.662086   13221 kubeadm.go:310] 
	I1010 11:43:38.662097   13221 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 11:43:38.662132   13221 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 11:43:38.662178   13221 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 11:43:38.662185   13221 kubeadm.go:310] 
	I1010 11:43:38.662252   13221 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 11:43:38.662260   13221 kubeadm.go:310] 
	I1010 11:43:38.662318   13221 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 11:43:38.662322   13221 kubeadm.go:310] 
	I1010 11:43:38.662345   13221 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 11:43:38.662416   13221 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 11:43:38.662471   13221 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 11:43:38.662492   13221 kubeadm.go:310] 
	I1010 11:43:38.662530   13221 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 11:43:38.662628   13221 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 11:43:38.662633   13221 kubeadm.go:310] 
	I1010 11:43:38.662750   13221 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6se1ez.f9ly5chl6izab28p \
	I1010 11:43:38.662873   13221 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79bae548dd16cfe936348b462da3f6d7ee9037c9333f736d9e5d628396c7b6e1 \
	I1010 11:43:38.662887   13221 kubeadm.go:310] 	--control-plane 
	I1010 11:43:38.662892   13221 kubeadm.go:310] 
	I1010 11:43:38.662931   13221 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 11:43:38.662935   13221 kubeadm.go:310] 
	I1010 11:43:38.662986   13221 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6se1ez.f9ly5chl6izab28p \
	I1010 11:43:38.663065   13221 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79bae548dd16cfe936348b462da3f6d7ee9037c9333f736d9e5d628396c7b6e1 
	I1010 11:43:38.663152   13221 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 11:43:38.663161   13221 cni.go:84] Creating CNI manager for ""
	I1010 11:43:38.663169   13221 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:43:38.666874   13221 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 11:43:38.673850   13221 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 11:43:38.676842   13221 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 11:43:38.681768   13221 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 11:43:38.681820   13221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 11:43:38.681839   13221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-616000 minikube.k8s.io/updated_at=2024_10_10T11_43_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=stopped-upgrade-616000 minikube.k8s.io/primary=true
	I1010 11:43:38.684907   13221 ops.go:34] apiserver oom_adj: -16
	I1010 11:43:38.727442   13221 kubeadm.go:1113] duration metric: took 45.666375ms to wait for elevateKubeSystemPrivileges
	I1010 11:43:38.727530   13221 kubeadm.go:394] duration metric: took 4m11.616681416s to StartCluster
	I1010 11:43:38.727543   13221 settings.go:142] acquiring lock: {Name:mkc38780b398d6ae1b1dc4b65b91e70a285222f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:43:38.727642   13221 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:43:38.728077   13221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/kubeconfig: {Name:mk76f18909e94718c05e51991d3ea4660849ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:43:38.728291   13221 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:43:38.728340   13221 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 11:43:38.728382   13221 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:43:38.728385   13221 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-616000"
	I1010 11:43:38.728392   13221 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-616000"
	W1010 11:43:38.728396   13221 addons.go:243] addon storage-provisioner should already be in state true
	I1010 11:43:38.728408   13221 host.go:66] Checking if "stopped-upgrade-616000" exists ...
	I1010 11:43:38.728416   13221 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-616000"
	I1010 11:43:38.728425   13221 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-616000"
	I1010 11:43:38.732675   13221 out.go:177] * Verifying Kubernetes components...
	I1010 11:43:38.733354   13221 kapi.go:59] client config for stopped-upgrade-616000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/client.key", CAFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102322a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1010 11:43:38.737095   13221 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-616000"
	W1010 11:43:38.737099   13221 addons.go:243] addon default-storageclass should already be in state true
	I1010 11:43:38.737106   13221 host.go:66] Checking if "stopped-upgrade-616000" exists ...
	I1010 11:43:38.737694   13221 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 11:43:38.737700   13221 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 11:43:38.737705   13221 sshutil.go:53] new ssh client: &{IP:localhost Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1010 11:43:38.740811   13221 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
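	The `sshutil` lines above open the channel everything else here rides on: an SSH session into the VM at localhost:53542 as user `docker`, authenticated with the profile's `id_rsa`, over which manifests are copied and `systemctl`/`kubectl` commands run. A minimal sketch of that pattern using `golang.org/x/crypto/ssh` (an assumed stand-in, not minikube's actual sshutil; key path and port taken from the log line above):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Private key path as reported by sshutil.go:53 above.
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; no host-key pinning
	}
	client, err := ssh.Dial("tcp", "localhost:53542", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// One command per session, as in the ssh_runner Run: lines.
	out, err := session.CombinedOutput("sudo systemctl start kubelet")
	fmt.Println(string(out), err)
}
```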
	I1010 11:43:38.188121   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:38.188302   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:38.199997   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:38.200085   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:38.210370   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:38.210453   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:38.225733   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:38.225816   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:38.236631   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:38.236712   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:38.247187   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:38.247275   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:38.258963   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:38.259049   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:38.271399   13085 logs.go:282] 0 containers: []
	W1010 11:43:38.271410   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:38.271488   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:38.282547   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:38.282567   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:38.282574   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:38.321231   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:38.321246   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:38.337829   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:38.337843   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:38.374610   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:38.374624   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:38.386864   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:38.386878   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:38.398710   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:38.398721   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:38.423893   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:38.423900   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:38.428881   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:38.428887   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:38.440216   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:38.440232   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:38.454059   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:38.454074   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:38.469919   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:38.469938   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:38.482599   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:38.482611   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:38.498083   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:38.498094   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:38.518571   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:38.518586   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:38.533550   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:38.533562   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
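	Each gathering cycle above follows the same two-step pattern: resolve a component's container ID with `docker ps -a --filter=name=k8s_<component>`, then tail that container's logs. A minimal sketch of the loop (assumed names, not minikube's actual logs.go):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose name matches k8s_<component>,
// mirroring the `docker ps -a --filter` invocations in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail each container, as in `docker logs --tail 400 <id>` above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
		}
	}
}
```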
	I1010 11:43:41.051612   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:38.744871   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:43:38.750951   13221 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 11:43:38.750959   13221 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 11:43:38.750967   13221 sshutil.go:53] new ssh client: &{IP:localhost Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1010 11:43:38.840320   13221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 11:43:38.846147   13221 api_server.go:52] waiting for apiserver process to appear ...
	I1010 11:43:38.846203   13221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:43:38.849998   13221 api_server.go:72] duration metric: took 121.698542ms to wait for apiserver process to appear ...
	I1010 11:43:38.850005   13221 api_server.go:88] waiting for apiserver healthz status ...
	I1010 11:43:38.850012   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:38.883825   13221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 11:43:38.903650   13221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 11:43:39.257790   13221 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1010 11:43:39.257801   13221 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1010 11:43:46.053935   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:46.054145   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:46.066102   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:46.066193   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:46.076401   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:46.076476   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:46.087035   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:46.087127   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:46.097422   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:46.097488   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:46.112845   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:46.112922   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:46.124059   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:46.124138   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:46.134302   13085 logs.go:282] 0 containers: []
	W1010 11:43:46.134319   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:46.134388   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:46.144957   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:46.144975   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:46.144981   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:46.156642   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:46.156653   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:46.179333   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:46.179341   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:46.190917   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:46.190928   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:46.195991   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:46.196000   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:46.208056   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:46.208067   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:46.223332   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:46.223343   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:46.240858   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:46.240869   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:46.275731   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:46.275741   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:46.290469   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:46.290479   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:46.302112   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:46.302122   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:46.339035   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:46.339047   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:46.353517   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:46.353528   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:46.366564   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:46.366575   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:46.378512   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:46.378522   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:43.852036   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:43.852066   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:48.890206   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:48.852498   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:48.852517   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:53.892319   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:53.892401   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:53.903958   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:43:53.904041   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:53.915613   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:43:53.915692   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:53.926624   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:43:53.926708   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:53.937321   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:43:53.937404   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:53.947779   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:43:53.947854   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:53.958770   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:43:53.958844   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:53.968778   13085 logs.go:282] 0 containers: []
	W1010 11:43:53.968790   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:53.968853   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:53.979588   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:43:53.979607   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:43:53.979613   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:43:53.997494   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:43:53.997505   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:43:54.009440   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:54.009451   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:54.013962   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:54.013970   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:54.048721   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:43:54.048736   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:43:54.062520   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:43:54.062530   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:54.076972   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:43:54.076983   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:43:54.089124   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:54.089138   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:54.112671   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:54.112681   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:54.147209   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:43:54.147217   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:43:54.158502   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:43:54.158512   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:43:54.170759   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:43:54.170770   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:43:54.187865   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:43:54.187876   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:54.200169   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:43:54.200182   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:43:54.214475   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:43:54.214486   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:43:56.728189   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:53.852832   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:53.852887   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:01.729510   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:01.729730   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:44:01.761138   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:44:01.761247   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:44:01.775623   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:44:01.775707   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:44:01.788599   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:44:01.788682   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:44:01.800796   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:44:01.800870   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:44:01.811563   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:44:01.811628   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:44:01.823256   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:44:01.823336   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:44:01.834523   13085 logs.go:282] 0 containers: []
	W1010 11:44:01.834533   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:44:01.834594   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:44:01.845738   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:44:01.845756   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:44:01.845761   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:44:01.858345   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:44:01.858359   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:44:01.870585   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:44:01.870598   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:44:01.882662   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:44:01.882673   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:44:01.894846   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:44:01.894860   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:44:01.918141   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:44:01.918149   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:44:01.930322   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:44:01.930333   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:44:01.934911   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:44:01.934920   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:44:01.949694   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:44:01.949704   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:44:01.961547   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:44:01.961558   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:44:01.979653   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:44:01.979667   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:44:02.014888   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:44:02.014896   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:44:02.034466   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:44:02.034476   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:44:02.069423   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:44:02.069434   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:44:02.081018   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:44:02.081027   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:43:58.853600   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:58.853638   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:04.598297   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:03.854309   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:03.854346   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:08.855222   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:08.855257   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1010 11:44:09.259915   13221 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1010 11:44:09.264277   13221 out.go:177] * Enabled addons: storage-provisioner
	I1010 11:44:09.600557   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:09.600782   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:44:09.622437   13085 logs.go:282] 1 containers: [6bcc3ab67cb5]
	I1010 11:44:09.622542   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:44:09.637941   13085 logs.go:282] 1 containers: [1c290cb5af04]
	I1010 11:44:09.638028   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:44:09.650296   13085 logs.go:282] 4 containers: [d6f3d6a35c5f c9e9e089b476 28cfc4235f98 f111889abf6e]
	I1010 11:44:09.650374   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:44:09.664638   13085 logs.go:282] 1 containers: [b8b2862ec0bc]
	I1010 11:44:09.664718   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:44:09.675578   13085 logs.go:282] 1 containers: [d18224ea6afb]
	I1010 11:44:09.675656   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:44:09.686076   13085 logs.go:282] 1 containers: [22fbb5338666]
	I1010 11:44:09.686153   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:44:09.697071   13085 logs.go:282] 0 containers: []
	W1010 11:44:09.697081   13085 logs.go:284] No container was found matching "kindnet"
	I1010 11:44:09.697143   13085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:44:09.707771   13085 logs.go:282] 1 containers: [ac4a6ae47f3d]
	I1010 11:44:09.707790   13085 logs.go:123] Gathering logs for coredns [c9e9e089b476] ...
	I1010 11:44:09.707796   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e9e089b476"
	I1010 11:44:09.719482   13085 logs.go:123] Gathering logs for coredns [28cfc4235f98] ...
	I1010 11:44:09.719495   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28cfc4235f98"
	I1010 11:44:09.731494   13085 logs.go:123] Gathering logs for storage-provisioner [ac4a6ae47f3d] ...
	I1010 11:44:09.731505   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac4a6ae47f3d"
	I1010 11:44:09.743317   13085 logs.go:123] Gathering logs for kubelet ...
	I1010 11:44:09.743328   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:44:09.777310   13085 logs.go:123] Gathering logs for dmesg ...
	I1010 11:44:09.777322   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:44:09.782393   13085 logs.go:123] Gathering logs for kube-apiserver [6bcc3ab67cb5] ...
	I1010 11:44:09.782407   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bcc3ab67cb5"
	I1010 11:44:09.798204   13085 logs.go:123] Gathering logs for etcd [1c290cb5af04] ...
	I1010 11:44:09.798219   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c290cb5af04"
	I1010 11:44:09.812411   13085 logs.go:123] Gathering logs for coredns [d6f3d6a35c5f] ...
	I1010 11:44:09.812421   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6f3d6a35c5f"
	I1010 11:44:09.824012   13085 logs.go:123] Gathering logs for kube-scheduler [b8b2862ec0bc] ...
	I1010 11:44:09.824026   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b2862ec0bc"
	I1010 11:44:09.839311   13085 logs.go:123] Gathering logs for Docker ...
	I1010 11:44:09.839325   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:44:09.862374   13085 logs.go:123] Gathering logs for container status ...
	I1010 11:44:09.862381   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:44:09.873568   13085 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:44:09.873581   13085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:44:09.909580   13085 logs.go:123] Gathering logs for coredns [f111889abf6e] ...
	I1010 11:44:09.909592   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f111889abf6e"
	I1010 11:44:09.921511   13085 logs.go:123] Gathering logs for kube-proxy [d18224ea6afb] ...
	I1010 11:44:09.921526   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d18224ea6afb"
	I1010 11:44:09.933770   13085 logs.go:123] Gathering logs for kube-controller-manager [22fbb5338666] ...
	I1010 11:44:09.933781   13085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22fbb5338666"
	I1010 11:44:09.270211   13221 addons.go:510] duration metric: took 30.542176625s for enable addons: enabled=[storage-provisioner]
	I1010 11:44:12.453343   13085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:17.455548   13085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:17.457446   13085 out.go:201] 
	W1010 11:44:17.462002   13085 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1010 11:44:17.462011   13085 out.go:270] * 
	W1010 11:44:17.462754   13085 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:44:17.473934   13085 out.go:201] 
	I1010 11:44:13.856323   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:13.856347   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:18.857670   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:18.857722   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:23.859644   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:23.859683   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:28.861913   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:28.861962   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
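	Both processes spend the rest of the run in the same poll loop: GET `https://10.0.2.15:8443/healthz`, retrying until the node-wait budget announced earlier ("Will wait 6m0s for node") expires, at which point minikube exits with GUEST_START. A minimal sketch of that loop (assumptions: the 5s per-request timeout is inferred from the check cadence above, and certificate verification is skipped for brevity where minikube instead trusts the profile's CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s spacing of the checks above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(6 * time.Minute) // the "Will wait 6m0s" budget
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// e.g. "context deadline exceeded (Client.Timeout exceeded ...)"
			fmt.Println("stopped:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		healthy := resp.StatusCode == http.StatusOK
		resp.Body.Close()
		if healthy {
			fmt.Println("apiserver healthz: ok")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver healthz never reported healthy")
}
```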
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-10-10 18:35:14 UTC, ends at Thu 2024-10-10 18:44:33 UTC. --
	Oct 10 18:44:17 running-upgrade-704000 dockerd[3236]: time="2024-10-10T18:44:17.755208395Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/03b2b9aa89656d9b2a415df4d561c54ebcc52b1a0bf14c27fa091680e5498cd1 pid=18972 runtime=io.containerd.runc.v2
	Oct 10 18:44:17 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:17Z" level=error msg="ContainerStats resp: {0x40005832c0 linux}"
	Oct 10 18:44:17 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:17Z" level=error msg="ContainerStats resp: {0x40008a5680 linux}"
	Oct 10 18:44:18 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:18Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 10 18:44:18 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:18Z" level=error msg="ContainerStats resp: {0x40008ddb80 linux}"
	Oct 10 18:44:20 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:20Z" level=error msg="ContainerStats resp: {0x4000359cc0 linux}"
	Oct 10 18:44:20 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:20Z" level=error msg="ContainerStats resp: {0x4000359ec0 linux}"
	Oct 10 18:44:20 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:20Z" level=error msg="ContainerStats resp: {0x40004843c0 linux}"
	Oct 10 18:44:20 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:20Z" level=error msg="ContainerStats resp: {0x4000667440 linux}"
	Oct 10 18:44:20 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:20Z" level=error msg="ContainerStats resp: {0x40004854c0 linux}"
	Oct 10 18:44:20 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:20Z" level=error msg="ContainerStats resp: {0x4000624000 linux}"
	Oct 10 18:44:20 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:20Z" level=error msg="ContainerStats resp: {0x4000624440 linux}"
	Oct 10 18:44:23 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:23Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 10 18:44:28 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:28Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 10 18:44:30 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:30Z" level=error msg="ContainerStats resp: {0x4000625440 linux}"
	Oct 10 18:44:30 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:30Z" level=error msg="ContainerStats resp: {0x4000358f00 linux}"
	Oct 10 18:44:31 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:31Z" level=error msg="ContainerStats resp: {0x40008dca80 linux}"
	Oct 10 18:44:32 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:32Z" level=error msg="ContainerStats resp: {0x40008dc040 linux}"
	Oct 10 18:44:32 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:32Z" level=error msg="ContainerStats resp: {0x40007786c0 linux}"
	Oct 10 18:44:32 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:32Z" level=error msg="ContainerStats resp: {0x40008dc940 linux}"
	Oct 10 18:44:32 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:32Z" level=error msg="ContainerStats resp: {0x40008dd000 linux}"
	Oct 10 18:44:32 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:32Z" level=error msg="ContainerStats resp: {0x40008dd400 linux}"
	Oct 10 18:44:32 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:32Z" level=error msg="ContainerStats resp: {0x40008dd880 linux}"
	Oct 10 18:44:32 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:32Z" level=error msg="ContainerStats resp: {0x40008ddcc0 linux}"
	Oct 10 18:44:33 running-upgrade-704000 cri-dockerd[3073]: time="2024-10-10T18:44:33Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	03b2b9aa89656       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   405e30ca6cf1d
	2dcde4e48aee5       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   253f0241620dc
	d6f3d6a35c5f6       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   405e30ca6cf1d
	c9e9e089b4766       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   253f0241620dc
	ac4a6ae47f3dc       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   d09534d0260dd
	d18224ea6afb2       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   f373936f9a13d
	22fbb53386668       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   99b9b7928789e
	b8b2862ec0bc2       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   405a8150cacaa
	6bcc3ab67cb51       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   c90e4fe0bb7f8
	1c290cb5af048       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   3a47d2c11b15c
	
	
	==> coredns [03b2b9aa8965] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7131660272732715854.2108492776806157378. HINFO: read udp 10.244.0.2:47832->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7131660272732715854.2108492776806157378. HINFO: read udp 10.244.0.2:49598->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7131660272732715854.2108492776806157378. HINFO: read udp 10.244.0.2:57668->10.0.2.3:53: i/o timeout
	
	
	==> coredns [2dcde4e48aee] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6470853702304912882.1786024150359207439. HINFO: read udp 10.244.0.3:46713->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6470853702304912882.1786024150359207439. HINFO: read udp 10.244.0.3:55507->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6470853702304912882.1786024150359207439. HINFO: read udp 10.244.0.3:54714->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c9e9e089b476] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1452311455833544532.8297404576537764392. HINFO: read udp 10.244.0.3:58934->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1452311455833544532.8297404576537764392. HINFO: read udp 10.244.0.3:58963->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1452311455833544532.8297404576537764392. HINFO: read udp 10.244.0.3:37423->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1452311455833544532.8297404576537764392. HINFO: read udp 10.244.0.3:51782->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1452311455833544532.8297404576537764392. HINFO: read udp 10.244.0.3:57719->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1452311455833544532.8297404576537764392. HINFO: read udp 10.244.0.3:37177->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1452311455833544532.8297404576537764392. HINFO: read udp 10.244.0.3:54654->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1452311455833544532.8297404576537764392. HINFO: read udp 10.244.0.3:41584->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1452311455833544532.8297404576537764392. HINFO: read udp 10.244.0.3:38191->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1452311455833544532.8297404576537764392. HINFO: read udp 10.244.0.3:46354->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d6f3d6a35c5f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2540045049299498212.5125548274142280038. HINFO: read udp 10.244.0.2:46889->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2540045049299498212.5125548274142280038. HINFO: read udp 10.244.0.2:47804->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2540045049299498212.5125548274142280038. HINFO: read udp 10.244.0.2:60563->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2540045049299498212.5125548274142280038. HINFO: read udp 10.244.0.2:44330->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2540045049299498212.5125548274142280038. HINFO: read udp 10.244.0.2:41818->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2540045049299498212.5125548274142280038. HINFO: read udp 10.244.0.2:60026->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2540045049299498212.5125548274142280038. HINFO: read udp 10.244.0.2:37665->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2540045049299498212.5125548274142280038. HINFO: read udp 10.244.0.2:36000->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2540045049299498212.5125548274142280038. HINFO: read udp 10.244.0.2:33989->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2540045049299498212.5125548274142280038. HINFO: read udp 10.244.0.2:39583->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
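	All four coredns instances report the same upstream failure: their HINFO self-test probes to 10.0.2.3:53, the DNS forwarder provided by QEMU user-mode networking, time out on read, so in-cluster DNS never reaches an upstream resolver. A minimal probe of that path (the target address is taken from the errors above; the lookup name is arbitrary):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Force every lookup through the upstream coredns is failing to reach.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "10.0.2.3:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.io")
	if err != nil {
		// Expected here: a read timeout matching the coredns i/o timeouts.
		fmt.Println("upstream unreachable:", err)
		return
	}
	fmt.Println(addrs)
}
```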
	
	
	==> describe nodes <==
	Name:               running-upgrade-704000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-704000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=running-upgrade-704000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T11_40_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:40:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-704000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:44:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:40:16 +0000   Thu, 10 Oct 2024 18:40:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:40:16 +0000   Thu, 10 Oct 2024 18:40:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:40:16 +0000   Thu, 10 Oct 2024 18:40:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:40:16 +0000   Thu, 10 Oct 2024 18:40:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-704000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 85396955ffdd47b68272b7eb67d8f243
	  System UUID:                85396955ffdd47b68272b7eb67d8f243
	  Boot ID:                    eabc2c16-2891-4cdc-9bd3-f9ef5f8d7e84
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2k6qr                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-94bfj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-704000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-704000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-704000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-lw7cv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-704000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  Starting                 4m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-704000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-704000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x3 over 4m22s)  kubelet          Node running-upgrade-704000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-704000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-704000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-704000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-704000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-704000 event: Registered Node running-upgrade-704000 in Controller
	
	
	==> dmesg <==
	[  +1.980045] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.080996] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.088480] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.146438] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.092046] systemd-fstab-generator[1046]: Ignoring "noauto" for root device
	[  +0.083479] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +2.675879] systemd-fstab-generator[1287]: Ignoring "noauto" for root device
	[  +9.197824] systemd-fstab-generator[1943]: Ignoring "noauto" for root device
	[  +2.835364] systemd-fstab-generator[2221]: Ignoring "noauto" for root device
	[  +0.141603] systemd-fstab-generator[2255]: Ignoring "noauto" for root device
	[  +0.094056] systemd-fstab-generator[2266]: Ignoring "noauto" for root device
	[  +0.097471] systemd-fstab-generator[2279]: Ignoring "noauto" for root device
	[ +13.290116] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.229276] systemd-fstab-generator[3029]: Ignoring "noauto" for root device
	[  +0.064825] systemd-fstab-generator[3041]: Ignoring "noauto" for root device
	[  +0.081071] systemd-fstab-generator[3052]: Ignoring "noauto" for root device
	[  +0.068322] systemd-fstab-generator[3066]: Ignoring "noauto" for root device
	[  +2.970709] systemd-fstab-generator[3223]: Ignoring "noauto" for root device
	[Oct10 18:36] systemd-fstab-generator[3732]: Ignoring "noauto" for root device
	[  +2.078595] systemd-fstab-generator[4421]: Ignoring "noauto" for root device
	[ +19.388298] kauditd_printk_skb: 68 callbacks suppressed
	[Oct10 18:37] kauditd_printk_skb: 21 callbacks suppressed
	[Oct10 18:40] systemd-fstab-generator[12127]: Ignoring "noauto" for root device
	[  +5.617545] systemd-fstab-generator[12735]: Ignoring "noauto" for root device
	[  +0.493059] systemd-fstab-generator[12864]: Ignoring "noauto" for root device
	
	
	==> etcd [1c290cb5af04] <==
	{"level":"info","ts":"2024-10-10T18:40:11.806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-10T18:40:11.806Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-10T18:40:11.808Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-10T18:40:11.808Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-10T18:40:11.808Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-10T18:40:11.808Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-10T18:40:11.808Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-10T18:40:12.206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-10T18:40:12.206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-10T18:40:12.206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-10T18:40:12.206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-10T18:40:12.206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-10T18:40:12.206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-10T18:40:12.206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-10T18:40:12.207Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T18:40:12.208Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-704000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-10T18:40:12.208Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T18:40:12.208Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-10-10T18:40:12.208Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-10T18:40:12.208Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-10T18:40:12.212Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T18:40:12.218Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-10T18:40:12.218Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T18:40:12.220Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T18:40:12.220Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:44:33 up 9 min,  0 users,  load average: 0.10, 0.31, 0.20
	Linux running-upgrade-704000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [6bcc3ab67cb5] <==
	I1010 18:40:13.638602       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1010 18:40:13.690328       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:40:13.690346       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1010 18:40:13.690330       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1010 18:40:13.692430       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1010 18:40:13.692670       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1010 18:40:13.692956       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1010 18:40:14.430030       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1010 18:40:14.595481       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1010 18:40:14.609528       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1010 18:40:14.609552       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:40:14.738195       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:40:14.747464       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:40:14.839876       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1010 18:40:14.841681       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1010 18:40:14.842041       1 controller.go:611] quota admission added evaluator for: endpoints
	I1010 18:40:14.843509       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1010 18:40:15.730020       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1010 18:40:16.351769       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1010 18:40:16.355212       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1010 18:40:16.360829       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1010 18:40:16.406845       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:40:28.834942       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1010 18:40:29.233740       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1010 18:40:29.737525       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [22fbb5338666] <==
	I1010 18:40:28.481658       1 shared_informer.go:262] Caches are synced for disruption
	I1010 18:40:28.481662       1 disruption.go:371] Sending events to api server.
	I1010 18:40:28.483353       1 shared_informer.go:262] Caches are synced for daemon sets
	I1010 18:40:28.483368       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1010 18:40:28.484587       1 shared_informer.go:262] Caches are synced for TTL
	I1010 18:40:28.493751       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1010 18:40:28.515609       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1010 18:40:28.516756       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1010 18:40:28.516786       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1010 18:40:28.516790       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1010 18:40:28.532169       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1010 18:40:28.614587       1 shared_informer.go:262] Caches are synced for crt configmap
	I1010 18:40:28.619880       1 shared_informer.go:262] Caches are synced for attach detach
	I1010 18:40:28.633133       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1010 18:40:28.644095       1 shared_informer.go:262] Caches are synced for resource quota
	I1010 18:40:28.684333       1 shared_informer.go:262] Caches are synced for resource quota
	I1010 18:40:28.705785       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1010 18:40:28.732364       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1010 18:40:28.836207       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1010 18:40:29.094183       1 shared_informer.go:262] Caches are synced for garbage collector
	I1010 18:40:29.181988       1 shared_informer.go:262] Caches are synced for garbage collector
	I1010 18:40:29.182004       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1010 18:40:29.238238       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lw7cv"
	I1010 18:40:29.589984       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-94bfj"
	I1010 18:40:29.593489       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2k6qr"
	
	
	==> kube-proxy [d18224ea6afb] <==
	I1010 18:40:29.724042       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1010 18:40:29.724085       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1010 18:40:29.724103       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1010 18:40:29.735258       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1010 18:40:29.735271       1 server_others.go:206] "Using iptables Proxier"
	I1010 18:40:29.735302       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1010 18:40:29.735440       1 server.go:661] "Version info" version="v1.24.1"
	I1010 18:40:29.735444       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:40:29.735844       1 config.go:444] "Starting node config controller"
	I1010 18:40:29.735863       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1010 18:40:29.735877       1 config.go:317] "Starting service config controller"
	I1010 18:40:29.735879       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1010 18:40:29.735884       1 config.go:226] "Starting endpoint slice config controller"
	I1010 18:40:29.735886       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1010 18:40:29.836568       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1010 18:40:29.836605       1 shared_informer.go:262] Caches are synced for node config
	I1010 18:40:29.836572       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [b8b2862ec0bc] <==
	W1010 18:40:13.638177       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1010 18:40:13.638190       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1010 18:40:13.638207       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1010 18:40:13.638211       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1010 18:40:13.638227       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1010 18:40:13.638230       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1010 18:40:13.638246       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1010 18:40:13.638252       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1010 18:40:13.638269       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1010 18:40:13.638276       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1010 18:40:13.638292       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1010 18:40:13.638296       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1010 18:40:13.638306       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1010 18:40:13.638309       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1010 18:40:13.638319       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1010 18:40:13.638322       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1010 18:40:14.471787       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1010 18:40:14.471834       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1010 18:40:14.536699       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1010 18:40:14.536721       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1010 18:40:14.602664       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1010 18:40:14.602688       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1010 18:40:14.641639       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1010 18:40:14.641738       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1010 18:40:15.035679       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-10-10 18:35:14 UTC, ends at Thu 2024-10-10 18:44:34 UTC. --
	Oct 10 18:40:18 running-upgrade-704000 kubelet[12741]: E1010 18:40:18.388558   12741 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-704000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-704000"
	Oct 10 18:40:28 running-upgrade-704000 kubelet[12741]: I1010 18:40:28.402677   12741 topology_manager.go:200] "Topology Admit Handler"
	Oct 10 18:40:28 running-upgrade-704000 kubelet[12741]: I1010 18:40:28.405051   12741 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 10 18:40:28 running-upgrade-704000 kubelet[12741]: I1010 18:40:28.405448   12741 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 10 18:40:28 running-upgrade-704000 kubelet[12741]: I1010 18:40:28.505768   12741 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d9e9887b-0808-4f96-a1e2-8b1830128a58-tmp\") pod \"storage-provisioner\" (UID: \"d9e9887b-0808-4f96-a1e2-8b1830128a58\") " pod="kube-system/storage-provisioner"
	Oct 10 18:40:28 running-upgrade-704000 kubelet[12741]: I1010 18:40:28.505790   12741 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpkfk\" (UniqueName: \"kubernetes.io/projected/d9e9887b-0808-4f96-a1e2-8b1830128a58-kube-api-access-dpkfk\") pod \"storage-provisioner\" (UID: \"d9e9887b-0808-4f96-a1e2-8b1830128a58\") " pod="kube-system/storage-provisioner"
	Oct 10 18:40:28 running-upgrade-704000 kubelet[12741]: E1010 18:40:28.609458   12741 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 10 18:40:28 running-upgrade-704000 kubelet[12741]: E1010 18:40:28.609481   12741 projected.go:192] Error preparing data for projected volume kube-api-access-dpkfk for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 10 18:40:28 running-upgrade-704000 kubelet[12741]: E1010 18:40:28.609514   12741 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/d9e9887b-0808-4f96-a1e2-8b1830128a58-kube-api-access-dpkfk podName:d9e9887b-0808-4f96-a1e2-8b1830128a58 nodeName:}" failed. No retries permitted until 2024-10-10 18:40:29.109503405 +0000 UTC m=+12.767157170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dpkfk" (UniqueName: "kubernetes.io/projected/d9e9887b-0808-4f96-a1e2-8b1830128a58-kube-api-access-dpkfk") pod "storage-provisioner" (UID: "d9e9887b-0808-4f96-a1e2-8b1830128a58") : configmap "kube-root-ca.crt" not found
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: E1010 18:40:29.209033   12741 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: E1010 18:40:29.209056   12741 projected.go:192] Error preparing data for projected volume kube-api-access-dpkfk for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: E1010 18:40:29.209087   12741 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/d9e9887b-0808-4f96-a1e2-8b1830128a58-kube-api-access-dpkfk podName:d9e9887b-0808-4f96-a1e2-8b1830128a58 nodeName:}" failed. No retries permitted until 2024-10-10 18:40:30.209076823 +0000 UTC m=+13.866730546 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dpkfk" (UniqueName: "kubernetes.io/projected/d9e9887b-0808-4f96-a1e2-8b1830128a58-kube-api-access-dpkfk") pod "storage-provisioner" (UID: "d9e9887b-0808-4f96-a1e2-8b1830128a58") : configmap "kube-root-ca.crt" not found
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: I1010 18:40:29.242107   12741 topology_manager.go:200] "Topology Admit Handler"
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: I1010 18:40:29.410828   12741 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66337bbb-d105-4b10-ad00-9685a41ffe0a-kube-proxy\") pod \"kube-proxy-lw7cv\" (UID: \"66337bbb-d105-4b10-ad00-9685a41ffe0a\") " pod="kube-system/kube-proxy-lw7cv"
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: I1010 18:40:29.410910   12741 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psr56\" (UniqueName: \"kubernetes.io/projected/66337bbb-d105-4b10-ad00-9685a41ffe0a-kube-api-access-psr56\") pod \"kube-proxy-lw7cv\" (UID: \"66337bbb-d105-4b10-ad00-9685a41ffe0a\") " pod="kube-system/kube-proxy-lw7cv"
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: I1010 18:40:29.410928   12741 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66337bbb-d105-4b10-ad00-9685a41ffe0a-xtables-lock\") pod \"kube-proxy-lw7cv\" (UID: \"66337bbb-d105-4b10-ad00-9685a41ffe0a\") " pod="kube-system/kube-proxy-lw7cv"
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: I1010 18:40:29.410937   12741 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66337bbb-d105-4b10-ad00-9685a41ffe0a-lib-modules\") pod \"kube-proxy-lw7cv\" (UID: \"66337bbb-d105-4b10-ad00-9685a41ffe0a\") " pod="kube-system/kube-proxy-lw7cv"
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: I1010 18:40:29.596185   12741 topology_manager.go:200] "Topology Admit Handler"
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: I1010 18:40:29.605318   12741 topology_manager.go:200] "Topology Admit Handler"
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: I1010 18:40:29.611349   12741 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hzcp\" (UniqueName: \"kubernetes.io/projected/dcd258e0-ae05-42e0-b6ce-ce6f0b8e2520-kube-api-access-6hzcp\") pod \"coredns-6d4b75cb6d-94bfj\" (UID: \"dcd258e0-ae05-42e0-b6ce-ce6f0b8e2520\") " pod="kube-system/coredns-6d4b75cb6d-94bfj"
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: I1010 18:40:29.611372   12741 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db21c71e-ba47-4ce4-be0d-8eb353a224f4-config-volume\") pod \"coredns-6d4b75cb6d-2k6qr\" (UID: \"db21c71e-ba47-4ce4-be0d-8eb353a224f4\") " pod="kube-system/coredns-6d4b75cb6d-2k6qr"
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: I1010 18:40:29.611382   12741 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcd258e0-ae05-42e0-b6ce-ce6f0b8e2520-config-volume\") pod \"coredns-6d4b75cb6d-94bfj\" (UID: \"dcd258e0-ae05-42e0-b6ce-ce6f0b8e2520\") " pod="kube-system/coredns-6d4b75cb6d-94bfj"
	Oct 10 18:40:29 running-upgrade-704000 kubelet[12741]: I1010 18:40:29.611393   12741 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8nt\" (UniqueName: \"kubernetes.io/projected/db21c71e-ba47-4ce4-be0d-8eb353a224f4-kube-api-access-lr8nt\") pod \"coredns-6d4b75cb6d-2k6qr\" (UID: \"db21c71e-ba47-4ce4-be0d-8eb353a224f4\") " pod="kube-system/coredns-6d4b75cb6d-2k6qr"
	Oct 10 18:44:17 running-upgrade-704000 kubelet[12741]: I1010 18:44:17.908229   12741 scope.go:110] "RemoveContainer" containerID="28cfc4235f988858c286abfb9ec9d119ff1affdfeb2474a2fddb4d9195a25636"
	Oct 10 18:44:17 running-upgrade-704000 kubelet[12741]: I1010 18:44:17.921462   12741 scope.go:110] "RemoveContainer" containerID="f111889abf6e8dc1fb3afe9b26f69729c6204c8c4a3cb47131b5316ef6e955ba"
	
	
	==> storage-provisioner [ac4a6ae47f3d] <==
	I1010 18:40:30.769173       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 18:40:30.780709       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 18:40:30.780986       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 18:40:30.788130       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:40:30.789445       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa3308fd-0193-4ff6-9510-6a82b66ae166", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-704000_76c98d18-af56-4275-91d2-66d8baeb5c8d became leader
	I1010 18:40:30.791258       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-704000_76c98d18-af56-4275-91d2-66d8baeb5c8d!
	I1010 18:40:30.893887       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-704000_76c98d18-af56-4275-91d2-66d8baeb5c8d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-704000 -n running-upgrade-704000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-704000 -n running-upgrade-704000: exit status 2 (15.640413875s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-704000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-704000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-704000
--- FAIL: TestRunningBinaryUpgrade (606.59s)
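To reproduce this failure outside CI, the upgrade test can be run on its own with standard Go tooling (a sketch, assuming a local minikube checkout with out/minikube-darwin-arm64 already built; any suite-specific flags the harness expects would be passed after -args and are omitted here):

	go test ./test/integration -run 'TestRunningBinaryUpgrade' -timeout 90m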

TestKubernetesUpgrade (18.88s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-587000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-587000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.871771625s)

-- stdout --
	* [kubernetes-upgrade-587000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-587000" primary control-plane node in "kubernetes-upgrade-587000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-587000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:37:43.407961   13148 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:37:43.408116   13148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:37:43.408120   13148 out.go:358] Setting ErrFile to fd 2...
	I1010 11:37:43.408122   13148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:37:43.408256   13148 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:37:43.409530   13148 out.go:352] Setting JSON to false
	I1010 11:37:43.428107   13148 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7634,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:37:43.428174   13148 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:37:43.432430   13148 out.go:177] * [kubernetes-upgrade-587000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:37:43.440668   13148 notify.go:220] Checking for updates...
	I1010 11:37:43.443618   13148 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:37:43.451629   13148 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:37:43.459587   13148 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:37:43.466572   13148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:37:43.474645   13148 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:37:43.482711   13148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:37:43.484600   13148 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:37:43.484690   13148 config.go:182] Loaded profile config "running-upgrade-704000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:37:43.484743   13148 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:37:43.487602   13148 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:37:43.494457   13148 start.go:297] selected driver: qemu2
	I1010 11:37:43.494465   13148 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:37:43.494471   13148 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:37:43.497097   13148 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:37:43.500640   13148 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:37:43.504731   13148 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1010 11:37:43.504750   13148 cni.go:84] Creating CNI manager for ""
	I1010 11:37:43.504787   13148 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1010 11:37:43.504819   13148 start.go:340] cluster config:
	{Name:kubernetes-upgrade-587000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:37:43.509736   13148 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:37:43.516595   13148 out.go:177] * Starting "kubernetes-upgrade-587000" primary control-plane node in "kubernetes-upgrade-587000" cluster
	I1010 11:37:43.520624   13148 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1010 11:37:43.520640   13148 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1010 11:37:43.520652   13148 cache.go:56] Caching tarball of preloaded images
	I1010 11:37:43.520753   13148 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:37:43.520759   13148 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1010 11:37:43.520834   13148 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/kubernetes-upgrade-587000/config.json ...
	I1010 11:37:43.520847   13148 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/kubernetes-upgrade-587000/config.json: {Name:mkaed9c033a70fb6842cdc76dd44d5ba6d5a3fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:37:43.521117   13148 start.go:360] acquireMachinesLock for kubernetes-upgrade-587000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:37:43.521174   13148 start.go:364] duration metric: took 50.042µs to acquireMachinesLock for "kubernetes-upgrade-587000"
	I1010 11:37:43.521189   13148 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:37:43.521217   13148 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:37:43.525619   13148 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:37:43.552934   13148 start.go:159] libmachine.API.Create for "kubernetes-upgrade-587000" (driver="qemu2")
	I1010 11:37:43.552962   13148 client.go:168] LocalClient.Create starting
	I1010 11:37:43.553036   13148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:37:43.553081   13148 main.go:141] libmachine: Decoding PEM data...
	I1010 11:37:43.553092   13148 main.go:141] libmachine: Parsing certificate...
	I1010 11:37:43.553128   13148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:37:43.553170   13148 main.go:141] libmachine: Decoding PEM data...
	I1010 11:37:43.553180   13148 main.go:141] libmachine: Parsing certificate...
	I1010 11:37:43.553520   13148 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:37:43.728451   13148 main.go:141] libmachine: Creating SSH key...
	I1010 11:37:43.780663   13148 main.go:141] libmachine: Creating Disk image...
	I1010 11:37:43.780679   13148 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:37:43.780879   13148 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2
	I1010 11:37:43.810194   13148 main.go:141] libmachine: STDOUT: 
	I1010 11:37:43.810213   13148 main.go:141] libmachine: STDERR: 
	I1010 11:37:43.810275   13148 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2 +20000M
	I1010 11:37:43.819473   13148 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:37:43.819489   13148 main.go:141] libmachine: STDERR: 
	I1010 11:37:43.819507   13148 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2
	I1010 11:37:43.819515   13148 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:37:43.819530   13148 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:37:43.819557   13148 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:fc:1e:8c:97:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2
	I1010 11:37:43.821453   13148 main.go:141] libmachine: STDOUT: 
	I1010 11:37:43.821467   13148 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:37:43.821489   13148 client.go:171] duration metric: took 268.5235ms to LocalClient.Create
	I1010 11:37:45.823595   13148 start.go:128] duration metric: took 2.302381208s to createHost
	I1010 11:37:45.823639   13148 start.go:83] releasing machines lock for "kubernetes-upgrade-587000", held for 2.302480375s
	W1010 11:37:45.823673   13148 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:37:45.833555   13148 out.go:177] * Deleting "kubernetes-upgrade-587000" in qemu2 ...
	W1010 11:37:45.851321   13148 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:37:45.851335   13148 start.go:729] Will try again in 5 seconds ...
	I1010 11:37:50.853504   13148 start.go:360] acquireMachinesLock for kubernetes-upgrade-587000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:37:50.853816   13148 start.go:364] duration metric: took 253.416µs to acquireMachinesLock for "kubernetes-upgrade-587000"
	I1010 11:37:50.853888   13148 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:37:50.854032   13148 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:37:50.862418   13148 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:37:50.893314   13148 start.go:159] libmachine.API.Create for "kubernetes-upgrade-587000" (driver="qemu2")
	I1010 11:37:50.893355   13148 client.go:168] LocalClient.Create starting
	I1010 11:37:50.893480   13148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:37:50.893550   13148 main.go:141] libmachine: Decoding PEM data...
	I1010 11:37:50.893565   13148 main.go:141] libmachine: Parsing certificate...
	I1010 11:37:50.893613   13148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:37:50.893659   13148 main.go:141] libmachine: Decoding PEM data...
	I1010 11:37:50.893671   13148 main.go:141] libmachine: Parsing certificate...
	I1010 11:37:50.894210   13148 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:37:51.055787   13148 main.go:141] libmachine: Creating SSH key...
	I1010 11:37:51.185606   13148 main.go:141] libmachine: Creating Disk image...
	I1010 11:37:51.185614   13148 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:37:51.185821   13148 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2
	I1010 11:37:51.195534   13148 main.go:141] libmachine: STDOUT: 
	I1010 11:37:51.195556   13148 main.go:141] libmachine: STDERR: 
	I1010 11:37:51.195605   13148 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2 +20000M
	I1010 11:37:51.204109   13148 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:37:51.204123   13148 main.go:141] libmachine: STDERR: 
	I1010 11:37:51.204133   13148 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2
	I1010 11:37:51.204137   13148 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:37:51.204148   13148 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:37:51.204171   13148 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:bd:9f:3c:c7:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2
	I1010 11:37:51.206024   13148 main.go:141] libmachine: STDOUT: 
	I1010 11:37:51.206050   13148 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:37:51.206062   13148 client.go:171] duration metric: took 312.705042ms to LocalClient.Create
	I1010 11:37:53.208314   13148 start.go:128] duration metric: took 2.354186459s to createHost
	I1010 11:37:53.208389   13148 start.go:83] releasing machines lock for "kubernetes-upgrade-587000", held for 2.354580542s
	W1010 11:37:53.208733   13148 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:37:53.218325   13148 out.go:201] 
	W1010 11:37:53.222341   13148 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:37:53.222368   13148 out.go:270] * 
	* 
	W1010 11:37:53.224698   13148 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:37:53.235346   13148 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-587000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
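Both create attempts above fail at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), meaning the socket_vmnet daemon was not listening on the host, so no qemu2 VM on this network can come up. A quick host-side check (a sketch; the service invocation assumes socket_vmnet was installed via Homebrew, as in the minikube qemu2 driver docs):

	ls -l /var/run/socket_vmnet                 # the unix socket should exist while the daemon is running
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet   # (re)start the daemon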
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-587000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-587000: (3.573005958s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-587000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-587000 status --format={{.Host}}: exit status 7 (65.351041ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-587000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-587000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.199037s)

-- stdout --
	* [kubernetes-upgrade-587000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-587000" primary control-plane node in "kubernetes-upgrade-587000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:37:56.922900   13185 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:37:56.923054   13185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:37:56.923058   13185 out.go:358] Setting ErrFile to fd 2...
	I1010 11:37:56.923060   13185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:37:56.923181   13185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:37:56.924275   13185 out.go:352] Setting JSON to false
	I1010 11:37:56.942842   13185 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7647,"bootTime":1728577829,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:37:56.942922   13185 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:37:56.948564   13185 out.go:177] * [kubernetes-upgrade-587000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:37:56.955472   13185 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:37:56.955533   13185 notify.go:220] Checking for updates...
	I1010 11:37:56.963520   13185 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:37:56.966544   13185 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:37:56.969573   13185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:37:56.972534   13185 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:37:56.975503   13185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:37:56.978802   13185 config.go:182] Loaded profile config "kubernetes-upgrade-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1010 11:37:56.979103   13185 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:37:56.983563   13185 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:37:56.990522   13185 start.go:297] selected driver: qemu2
	I1010 11:37:56.990528   13185 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:37:56.990570   13185 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:37:56.993314   13185 cni.go:84] Creating CNI manager for ""
	I1010 11:37:56.993348   13185 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:37:56.993376   13185 start.go:340] cluster config:
	{Name:kubernetes-upgrade-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:37:56.997717   13185 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:37:57.005477   13185 out.go:177] * Starting "kubernetes-upgrade-587000" primary control-plane node in "kubernetes-upgrade-587000" cluster
	I1010 11:37:57.009537   13185 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:37:57.009556   13185 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:37:57.009562   13185 cache.go:56] Caching tarball of preloaded images
	I1010 11:37:57.009636   13185 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:37:57.009642   13185 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:37:57.009688   13185 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/kubernetes-upgrade-587000/config.json ...
	I1010 11:37:57.010194   13185 start.go:360] acquireMachinesLock for kubernetes-upgrade-587000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:37:57.010227   13185 start.go:364] duration metric: took 26.542µs to acquireMachinesLock for "kubernetes-upgrade-587000"
	I1010 11:37:57.010237   13185 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:37:57.010244   13185 fix.go:54] fixHost starting: 
	I1010 11:37:57.010368   13185 fix.go:112] recreateIfNeeded on kubernetes-upgrade-587000: state=Stopped err=<nil>
	W1010 11:37:57.010375   13185 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:37:57.017531   13185 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-587000" ...
	I1010 11:37:57.021532   13185 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:37:57.021570   13185 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:bd:9f:3c:c7:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2
	I1010 11:37:57.023824   13185 main.go:141] libmachine: STDOUT: 
	I1010 11:37:57.023842   13185 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:37:57.023870   13185 fix.go:56] duration metric: took 13.626667ms for fixHost
	I1010 11:37:57.023875   13185 start.go:83] releasing machines lock for "kubernetes-upgrade-587000", held for 13.643125ms
	W1010 11:37:57.023882   13185 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:37:57.023927   13185 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:37:57.023931   13185 start.go:729] Will try again in 5 seconds ...
	I1010 11:38:02.026186   13185 start.go:360] acquireMachinesLock for kubernetes-upgrade-587000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:38:02.026602   13185 start.go:364] duration metric: took 307.792µs to acquireMachinesLock for "kubernetes-upgrade-587000"
	I1010 11:38:02.026731   13185 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:38:02.026751   13185 fix.go:54] fixHost starting: 
	I1010 11:38:02.027542   13185 fix.go:112] recreateIfNeeded on kubernetes-upgrade-587000: state=Stopped err=<nil>
	W1010 11:38:02.027572   13185 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:38:02.036905   13185 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-587000" ...
	I1010 11:38:02.040936   13185 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:38:02.041182   13185 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:bd:9f:3c:c7:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubernetes-upgrade-587000/disk.qcow2
	I1010 11:38:02.050504   13185 main.go:141] libmachine: STDOUT: 
	I1010 11:38:02.050569   13185 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:38:02.050651   13185 fix.go:56] duration metric: took 23.902541ms for fixHost
	I1010 11:38:02.050666   13185 start.go:83] releasing machines lock for "kubernetes-upgrade-587000", held for 24.041875ms
	W1010 11:38:02.050893   13185 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-587000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-587000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:38:02.059905   13185 out.go:201] 
	W1010 11:38:02.063998   13185 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:38:02.064038   13185 out.go:270] * 
	* 
	W1010 11:38:02.066239   13185 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:38:02.075783   13185 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-587000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-587000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-587000 version --output=json: exit status 1 (57.018458ms)

** stderr ** 
	error: context "kubernetes-upgrade-587000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-10 11:38:02.147019 -0700 PDT m=+940.950322793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-587000 -n kubernetes-upgrade-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-587000 -n kubernetes-upgrade-587000: exit status 7 (35.295625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-587000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-587000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-587000
--- FAIL: TestKubernetesUpgrade (18.88s)
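Note on the failure above: both start attempts die at the same point. The qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which immediately reports `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning nothing was listening on the socket_vmnet unix socket, so the VM's NIC could never be attached and minikube exited with GUEST_PROVISION (status 80). A minimal pre-flight probe, offered here only as a sketch that assumes the default socket path printed in this log, separates "daemon down on the host" from a VM-side problem:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Probe the socket_vmnet unix socket that minikube's qemu2 driver
    // depends on. A "connection refused" here matches the error in the
    // log above and points at the host-side daemon, not the minikube VM.
    func main() {
        const sock = "/var/run/socket_vmnet" // path printed in the log above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
            return
        }
        conn.Close()
        fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
    }

If the probe fails, restarting the socket_vmnet daemon on the CI host is the likely fix; the `minikube delete -p kubernetes-upgrade-587000` suggested in the log only clears the stale profile and would not bring the daemon back.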

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.98s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19787
- KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3039677373/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.98s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.12s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19787
- KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2434112169/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.12s)
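Note on the two hyperkit failures above: exit status 56 is minikube's DRV_UNSUPPORTED_OS code, and the hyperkit driver is x86_64-only, so on this darwin/arm64 agent the subtests can never pass. A hypothetical guard (the helper name below is ours, not minikube's) that a test like this could run before invoking the driver:

    package driver_test

    import (
        "runtime"
        "testing"
    )

    // skipIfAppleSilicon skips hyperkit-based tests on Apple silicon,
    // where the driver cannot exist and a run can only ever record the
    // DRV_UNSUPPORTED_OS (exit status 56) failure seen above.
    func skipIfAppleSilicon(t *testing.T) {
        t.Helper()
        if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
            t.Skip("hyperkit driver is not supported on darwin/arm64")
        }
    }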

TestStoppedBinaryUpgrade/Upgrade (576.61s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2899538344 start -p stopped-upgrade-616000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2899538344 start -p stopped-upgrade-616000 --memory=2200 --vm-driver=qemu2 : (41.955038375s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2899538344 -p stopped-upgrade-616000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2899538344 -p stopped-upgrade-616000 stop: (12.095041875s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-616000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-616000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.47798125s)

-- stdout --
	* [stopped-upgrade-616000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-616000" primary control-plane node in "stopped-upgrade-616000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-616000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1010 11:38:57.583243   13221 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:38:57.583401   13221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:38:57.583405   13221 out.go:358] Setting ErrFile to fd 2...
	I1010 11:38:57.583408   13221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:38:57.583547   13221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:38:57.584785   13221 out.go:352] Setting JSON to false
	I1010 11:38:57.604065   13221 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7708,"bootTime":1728577829,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:38:57.604148   13221 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:38:57.607810   13221 out.go:177] * [stopped-upgrade-616000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:38:57.615791   13221 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:38:57.615841   13221 notify.go:220] Checking for updates...
	I1010 11:38:57.622733   13221 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:38:57.625752   13221 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:38:57.627095   13221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:38:57.629700   13221 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:38:57.632755   13221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:38:57.636055   13221 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:38:57.639739   13221 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1010 11:38:57.642732   13221 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:38:57.646719   13221 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:38:57.653727   13221 start.go:297] selected driver: qemu2
	I1010 11:38:57.653733   13221 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53577 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1010 11:38:57.653782   13221 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:38:57.656649   13221 cni.go:84] Creating CNI manager for ""
	I1010 11:38:57.656685   13221 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:38:57.656714   13221 start.go:340] cluster config:
	{Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53577 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1010 11:38:57.656764   13221 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:38:57.660721   13221 out.go:177] * Starting "stopped-upgrade-616000" primary control-plane node in "stopped-upgrade-616000" cluster
	I1010 11:38:57.668764   13221 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1010 11:38:57.668787   13221 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1010 11:38:57.668797   13221 cache.go:56] Caching tarball of preloaded images
	I1010 11:38:57.668887   13221 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:38:57.668894   13221 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1010 11:38:57.668945   13221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/config.json ...
	I1010 11:38:57.669494   13221 start.go:360] acquireMachinesLock for stopped-upgrade-616000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:38:57.669554   13221 start.go:364] duration metric: took 51.417µs to acquireMachinesLock for "stopped-upgrade-616000"
	I1010 11:38:57.669565   13221 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:38:57.669571   13221 fix.go:54] fixHost starting: 
	I1010 11:38:57.669701   13221 fix.go:112] recreateIfNeeded on stopped-upgrade-616000: state=Stopped err=<nil>
	W1010 11:38:57.669709   13221 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:38:57.673723   13221 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-616000" ...
	I1010 11:38:57.681755   13221 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:38:57.681836   13221 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53542-:22,hostfwd=tcp::53543-:2376,hostname=stopped-upgrade-616000 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/disk.qcow2
	I1010 11:38:57.734641   13221 main.go:141] libmachine: STDOUT: 
	I1010 11:38:57.734668   13221 main.go:141] libmachine: STDERR: 
	I1010 11:38:57.734674   13221 main.go:141] libmachine: Waiting for VM to start (ssh -p 53542 docker@127.0.0.1)...
	I1010 11:39:17.990351   13221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/config.json ...
	I1010 11:39:17.991423   13221 machine.go:93] provisionDockerMachine start ...
	I1010 11:39:17.991703   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:17.992257   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:17.992279   13221 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 11:39:18.076383   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 11:39:18.076413   13221 buildroot.go:166] provisioning hostname "stopped-upgrade-616000"
	I1010 11:39:18.076530   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:18.076718   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:18.076729   13221 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-616000 && echo "stopped-upgrade-616000" | sudo tee /etc/hostname
	I1010 11:39:18.151610   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-616000
	
	I1010 11:39:18.151705   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:18.151860   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:18.151872   13221 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-616000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-616000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-616000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 11:39:18.223182   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 11:39:18.223197   13221 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19787-10623/.minikube CaCertPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19787-10623/.minikube}
	I1010 11:39:18.223211   13221 buildroot.go:174] setting up certificates
	I1010 11:39:18.223217   13221 provision.go:84] configureAuth start
	I1010 11:39:18.223224   13221 provision.go:143] copyHostCerts
	I1010 11:39:18.223294   13221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.pem, removing ...
	I1010 11:39:18.223303   13221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.pem
	I1010 11:39:18.223426   13221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.pem (1082 bytes)
	I1010 11:39:18.223659   13221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19787-10623/.minikube/cert.pem, removing ...
	I1010 11:39:18.223664   13221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19787-10623/.minikube/cert.pem
	I1010 11:39:18.223718   13221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19787-10623/.minikube/cert.pem (1123 bytes)
	I1010 11:39:18.223853   13221 exec_runner.go:144] found /Users/jenkins/minikube-integration/19787-10623/.minikube/key.pem, removing ...
	I1010 11:39:18.223857   13221 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19787-10623/.minikube/key.pem
	I1010 11:39:18.223907   13221 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19787-10623/.minikube/key.pem (1675 bytes)
	I1010 11:39:18.224033   13221 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-616000 san=[127.0.0.1 localhost minikube stopped-upgrade-616000]
	I1010 11:39:18.260368   13221 provision.go:177] copyRemoteCerts
	I1010 11:39:18.260408   13221 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 11:39:18.260415   13221 sshutil.go:53] new ssh client: &{IP:localhost Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1010 11:39:18.294261   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 11:39:18.301595   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 11:39:18.308526   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 11:39:18.315753   13221 provision.go:87] duration metric: took 92.52575ms to configureAuth
	I1010 11:39:18.315763   13221 buildroot.go:189] setting minikube options for container-runtime
	I1010 11:39:18.315871   13221 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:39:18.315911   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:18.316001   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:18.316006   13221 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1010 11:39:18.379456   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1010 11:39:18.379466   13221 buildroot.go:70] root file system type: tmpfs
	I1010 11:39:18.379517   13221 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1010 11:39:18.379579   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:18.379687   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:18.379721   13221 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1010 11:39:18.446912   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1010 11:39:18.446976   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:18.447081   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:18.447089   13221 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1010 11:39:18.819051   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1010 11:39:18.819065   13221 machine.go:96] duration metric: took 827.631583ms to provisionDockerMachine
	I1010 11:39:18.819073   13221 start.go:293] postStartSetup for "stopped-upgrade-616000" (driver="qemu2")
	I1010 11:39:18.819079   13221 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 11:39:18.819152   13221 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 11:39:18.819162   13221 sshutil.go:53] new ssh client: &{IP:localhost Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1010 11:39:18.853684   13221 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 11:39:18.855041   13221 info.go:137] Remote host: Buildroot 2021.02.12
	I1010 11:39:18.855048   13221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19787-10623/.minikube/addons for local assets ...
	I1010 11:39:18.855117   13221 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19787-10623/.minikube/files for local assets ...
	I1010 11:39:18.855200   13221 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/ssl/certs/111352.pem -> 111352.pem in /etc/ssl/certs
	I1010 11:39:18.855297   13221 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 11:39:18.858285   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/ssl/certs/111352.pem --> /etc/ssl/certs/111352.pem (1708 bytes)
	I1010 11:39:18.865591   13221 start.go:296] duration metric: took 46.512917ms for postStartSetup
	I1010 11:39:18.865607   13221 fix.go:56] duration metric: took 21.196246292s for fixHost
	I1010 11:39:18.865657   13221 main.go:141] libmachine: Using SSH client type: native
	I1010 11:39:18.865756   13221 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008ca480] 0x1008cccc0 <nil>  [] 0s} localhost 53542 <nil> <nil>}
	I1010 11:39:18.865760   13221 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 11:39:18.927552   13221 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728585559.360355129
	
	I1010 11:39:18.927560   13221 fix.go:216] guest clock: 1728585559.360355129
	I1010 11:39:18.927564   13221 fix.go:229] Guest: 2024-10-10 11:39:19.360355129 -0700 PDT Remote: 2024-10-10 11:39:18.865609 -0700 PDT m=+21.304767251 (delta=494.746129ms)
	I1010 11:39:18.927575   13221 fix.go:200] guest clock delta is within tolerance: 494.746129ms
	I1010 11:39:18.927579   13221 start.go:83] releasing machines lock for "stopped-upgrade-616000", held for 21.258228875s
	I1010 11:39:18.927641   13221 ssh_runner.go:195] Run: cat /version.json
	I1010 11:39:18.927649   13221 sshutil.go:53] new ssh client: &{IP:localhost Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1010 11:39:18.927671   13221 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 11:39:18.927688   13221 sshutil.go:53] new ssh client: &{IP:localhost Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	W1010 11:39:18.928230   13221 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53542: connect: connection refused
	I1010 11:39:18.928256   13221 retry.go:31] will retry after 269.51375ms: dial tcp [::1]:53542: connect: connection refused
	W1010 11:39:19.236594   13221 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1010 11:39:19.236697   13221 ssh_runner.go:195] Run: systemctl --version
	I1010 11:39:19.239144   13221 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 11:39:19.241567   13221 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 11:39:19.241623   13221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1010 11:39:19.245381   13221 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1010 11:39:19.251092   13221 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 11:39:19.251113   13221 start.go:495] detecting cgroup driver to use...
	I1010 11:39:19.251202   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 11:39:19.259149   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1010 11:39:19.262713   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1010 11:39:19.266277   13221 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1010 11:39:19.266312   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1010 11:39:19.269618   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1010 11:39:19.272597   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1010 11:39:19.275460   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1010 11:39:19.278784   13221 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 11:39:19.282224   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1010 11:39:19.285714   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1010 11:39:19.288898   13221 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1010 11:39:19.291767   13221 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 11:39:19.294830   13221 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 11:39:19.298110   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:39:19.381918   13221 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1010 11:39:19.388147   13221 start.go:495] detecting cgroup driver to use...
	I1010 11:39:19.388215   13221 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1010 11:39:19.393055   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 11:39:19.398212   13221 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 11:39:19.407013   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 11:39:19.412399   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1010 11:39:19.416968   13221 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1010 11:39:19.474294   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1010 11:39:19.479695   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 11:39:19.485299   13221 ssh_runner.go:195] Run: which cri-dockerd
	I1010 11:39:19.486675   13221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1010 11:39:19.489748   13221 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1010 11:39:19.495063   13221 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1010 11:39:19.572794   13221 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1010 11:39:19.655670   13221 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1010 11:39:19.655740   13221 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1010 11:39:19.660886   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:39:19.737781   13221 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1010 11:39:20.884174   13221 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.146385542s)
	I1010 11:39:20.884261   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1010 11:39:20.888887   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1010 11:39:20.893099   13221 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1010 11:39:20.976569   13221 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1010 11:39:21.063632   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:39:21.143682   13221 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1010 11:39:21.149548   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1010 11:39:21.153917   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:39:21.242814   13221 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1010 11:39:21.280991   13221 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1010 11:39:21.281095   13221 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1010 11:39:21.283047   13221 start.go:563] Will wait 60s for crictl version
	I1010 11:39:21.283110   13221 ssh_runner.go:195] Run: which crictl
	I1010 11:39:21.284828   13221 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 11:39:21.300111   13221 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1010 11:39:21.300208   13221 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1010 11:39:21.317765   13221 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1010 11:39:21.338048   13221 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1010 11:39:21.338144   13221 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1010 11:39:21.339996   13221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 11:39:21.343693   13221 kubeadm.go:883] updating cluster {Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53577 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1010 11:39:21.343766   13221 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1010 11:39:21.343830   13221 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1010 11:39:21.355415   13221 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1010 11:39:21.355427   13221 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1010 11:39:21.355490   13221 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1010 11:39:21.359154   13221 ssh_runner.go:195] Run: which lz4
	I1010 11:39:21.360658   13221 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 11:39:21.361856   13221 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 11:39:21.361870   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1010 11:39:22.330981   13221 docker.go:649] duration metric: took 970.383125ms to copy over tarball
	I1010 11:39:22.331049   13221 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 11:39:23.518040   13221 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.18698675s)
	I1010 11:39:23.518057   13221 ssh_runner.go:146] rm: /preloaded.tar.lz4
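
The preload tarball is scp'd to the guest, unpacked under /var with streaming lz4 decompression, then deleted. The --xattrs flags preserve security.capability extended attributes so any file capabilities inside the tarball survive extraction. A minimal Go sketch wrapping the same tar invocation shown above:

    package main

    import (
    	"os"
    	"os/exec"
    )

    // extractPreload mirrors the tar command logged above: stream-decompress
    // with lz4 and unpack under /var, keeping security.capability xattrs.
    func extractPreload(tarball string) error {
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
    		panic(err)
    	}
    }
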
	I1010 11:39:23.534360   13221 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1010 11:39:23.538154   13221 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1010 11:39:23.543735   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:39:23.621788   13221 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1010 11:39:25.352003   13221 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.730214834s)
	I1010 11:39:25.352121   13221 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1010 11:39:25.363622   13221 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1010 11:39:25.363630   13221 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1010 11:39:25.363636   13221 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 11:39:25.370211   13221 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:39:25.371530   13221 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:39:25.373578   13221 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:39:25.373884   13221 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:39:25.375699   13221 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:39:25.375727   13221 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:39:25.376995   13221 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:39:25.377098   13221 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1010 11:39:25.378334   13221 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:39:25.378835   13221 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:39:25.379622   13221 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1010 11:39:25.379903   13221 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1010 11:39:25.380712   13221 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:39:25.381028   13221 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:39:25.381498   13221 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1010 11:39:25.382588   13221 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:39:26.028122   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:39:26.039899   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:39:26.040984   13221 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1010 11:39:26.041010   13221 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:39:26.041043   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1010 11:39:26.051911   13221 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1010 11:39:26.051934   13221 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:39:26.052040   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1010 11:39:26.052821   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:39:26.065723   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1010 11:39:26.067050   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1010 11:39:26.068859   13221 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1010 11:39:26.068875   13221 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:39:26.068931   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1010 11:39:26.079410   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1010 11:39:26.093529   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1010 11:39:26.103337   13221 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1010 11:39:26.103364   13221 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1010 11:39:26.103420   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1010 11:39:26.113312   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1010 11:39:26.174449   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:39:26.186820   13221 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1010 11:39:26.186840   13221 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:39:26.186915   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1010 11:39:26.197566   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1010 11:39:26.211253   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1010 11:39:26.221903   13221 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1010 11:39:26.221923   13221 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1010 11:39:26.222003   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1010 11:39:26.236246   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1010 11:39:26.236400   13221 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1010 11:39:26.237970   13221 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1010 11:39:26.237978   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1010 11:39:26.246039   13221 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1010 11:39:26.246048   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W1010 11:39:26.250279   13221 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1010 11:39:26.250419   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:39:26.284472   13221 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
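
Each image that "needs transfer" follows the same pattern just completed for pause:3.7: remove the stale tag, scp the cached tarball into /var/lib/minikube/images, then pipe it into the daemon with `sudo cat <file> | docker load`. A minimal Go equivalent of that pipe (sudo elided; the path is the pause_3.7 file from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // loadImage pipes an image tarball into the daemon, the equivalent of the
    // `sudo cat <path> | docker load` pipeline run above.
    func loadImage(path string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	cmd := exec.Command("docker", "load")
    	cmd.Stdin = f // docker load reads the image tar from stdin
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s", out)
    	return err
    }

    func main() {
    	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
    		panic(err)
    	}
    }
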
	I1010 11:39:26.284515   13221 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1010 11:39:26.284532   13221 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:39:26.284625   13221 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1010 11:39:26.296678   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1010 11:39:26.296816   13221 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1010 11:39:26.298270   13221 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1010 11:39:26.298280   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1010 11:39:26.339098   13221 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1010 11:39:26.339134   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1010 11:39:26.377732   13221 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1010 11:39:26.455416   13221 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1010 11:39:26.455567   13221 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:39:26.468908   13221 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1010 11:39:26.468939   13221 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:39:26.469005   13221 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:39:26.483816   13221 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 11:39:26.483951   13221 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 11:39:26.485250   13221 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1010 11:39:26.485263   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1010 11:39:26.514868   13221 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 11:39:26.514885   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1010 11:39:26.754270   13221 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 11:39:26.754316   13221 cache_images.go:92] duration metric: took 1.39068725s to LoadCachedImages
	W1010 11:39:26.754369   13221 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1010 11:39:26.754376   13221 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1010 11:39:26.754434   13221 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-616000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 11:39:26.754511   13221 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1010 11:39:26.768032   13221 cni.go:84] Creating CNI manager for ""
	I1010 11:39:26.768043   13221 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:39:26.768048   13221 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 11:39:26.768057   13221 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-616000 NodeName:stopped-upgrade-616000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 11:39:26.768127   13221 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-616000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 11:39:26.768197   13221 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1010 11:39:26.771037   13221 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 11:39:26.771074   13221 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 11:39:26.774159   13221 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1010 11:39:26.779226   13221 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 11:39:26.784276   13221 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
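
The 2096-byte rendering of the kubeadm config shown above is staged as kubeadm.yaml.new and diffed against the live copy further down. As an illustrative sketch only — using the third-party gopkg.in/yaml.v3 package, which nothing in this log implies minikube itself uses here — one can pull a single field such as cgroupDriver back out of the multi-document file:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	// The file holds several YAML documents separated by "---" lines;
    	// scan them for the KubeletConfiguration and report its cgroupDriver.
    	for _, doc := range strings.Split(string(raw), "\n---\n") {
    		var m map[string]interface{}
    		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
    			continue
    		}
    		if m["kind"] == "KubeletConfiguration" {
    			fmt.Println("cgroupDriver:", m["cgroupDriver"]) // expect "cgroupfs"
    		}
    	}
    }
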
	I1010 11:39:26.789445   13221 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1010 11:39:26.790590   13221 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 11:39:26.794399   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:39:26.874098   13221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 11:39:26.880185   13221 certs.go:68] Setting up /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000 for IP: 10.0.2.15
	I1010 11:39:26.880195   13221 certs.go:194] generating shared ca certs ...
	I1010 11:39:26.880205   13221 certs.go:226] acquiring lock for ca certs: {Name:mk609fb55a881bb4c70c8ff17f366ce3ffd355cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:39:26.880372   13221 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.key
	I1010 11:39:26.880638   13221 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/proxy-client-ca.key
	I1010 11:39:26.880649   13221 certs.go:256] generating profile certs ...
	I1010 11:39:26.880879   13221 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/client.key
	I1010 11:39:26.880899   13221 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213
	I1010 11:39:26.880911   13221 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1010 11:39:26.982871   13221 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213 ...
	I1010 11:39:26.982885   13221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213: {Name:mke4d2cca97cd85a4f67bb0f1cfbfeabfb6c5007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:39:26.983174   13221 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213 ...
	I1010 11:39:26.983179   13221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213: {Name:mk871611112a3a344c03cb5c05e3edc8ede37b7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:39:26.983328   13221 certs.go:381] copying /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.crt
	I1010 11:39:26.983442   13221 certs.go:385] copying /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.key
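
The apiserver cert just written carries four IP SANs: the service VIP 10.96.0.1, 127.0.0.1, 10.0.0.1, and the node IP 10.0.2.15. A self-contained Go sketch of minting a cert with those SANs; note this one is self-signed for brevity, whereas the real certificate is issued by minikubeCA:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The same IP SANs the log lists for apiserver.crt.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
    		},
    	}
    	// Self-signed: the template is both subject and issuer.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
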
	I1010 11:39:26.983785   13221 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/proxy-client.key
	I1010 11:39:26.983928   13221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/11135.pem (1338 bytes)
	W1010 11:39:26.984111   13221 certs.go:480] ignoring /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/11135_empty.pem, impossibly tiny 0 bytes
	I1010 11:39:26.984119   13221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 11:39:26.984148   13221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem (1082 bytes)
	I1010 11:39:26.984169   13221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem (1123 bytes)
	I1010 11:39:26.984189   13221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/key.pem (1675 bytes)
	I1010 11:39:26.984243   13221 certs.go:484] found cert: /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/ssl/certs/111352.pem (1708 bytes)
	I1010 11:39:26.984610   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 11:39:26.991383   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1010 11:39:26.998850   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 11:39:27.005720   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 11:39:27.012613   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 11:39:27.019395   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 11:39:27.026770   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 11:39:27.034458   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 11:39:27.042144   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 11:39:27.049534   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/11135.pem --> /usr/share/ca-certificates/11135.pem (1338 bytes)
	I1010 11:39:27.056750   13221 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/ssl/certs/111352.pem --> /usr/share/ca-certificates/111352.pem (1708 bytes)
	I1010 11:39:27.063762   13221 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 11:39:27.068745   13221 ssh_runner.go:195] Run: openssl version
	I1010 11:39:27.070654   13221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 11:39:27.074105   13221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 11:39:27.075680   13221 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 18:35 /usr/share/ca-certificates/minikubeCA.pem
	I1010 11:39:27.075713   13221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 11:39:27.077504   13221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 11:39:27.080283   13221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11135.pem && ln -fs /usr/share/ca-certificates/11135.pem /etc/ssl/certs/11135.pem"
	I1010 11:39:27.083139   13221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11135.pem
	I1010 11:39:27.084505   13221 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:23 /usr/share/ca-certificates/11135.pem
	I1010 11:39:27.084532   13221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11135.pem
	I1010 11:39:27.086241   13221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11135.pem /etc/ssl/certs/51391683.0"
	I1010 11:39:27.089650   13221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111352.pem && ln -fs /usr/share/ca-certificates/111352.pem /etc/ssl/certs/111352.pem"
	I1010 11:39:27.092631   13221 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111352.pem
	I1010 11:39:27.093991   13221 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:23 /usr/share/ca-certificates/111352.pem
	I1010 11:39:27.094025   13221 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111352.pem
	I1010 11:39:27.095805   13221 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111352.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 11:39:27.098955   13221 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 11:39:27.100778   13221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 11:39:27.103540   13221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 11:39:27.105643   13221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 11:39:27.107933   13221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 11:39:27.109723   13221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 11:39:27.111514   13221 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
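
The `-checkend 86400` runs above ask openssl whether each cert will still be valid 86400 seconds (24 hours) from now; a zero exit status means yes, so none of these certs is about to expire. A tiny Go wrapper around the same invocation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // expiresWithinDay mirrors `openssl x509 -noout -in <crt> -checkend 86400`:
    // openssl exits nonzero iff the cert will have expired 24h from now.
    func expiresWithinDay(crt string) bool {
    	cmd := exec.Command("openssl", "x509", "-noout", "-in", crt,
    		"-checkend", "86400")
    	return cmd.Run() != nil
    }

    func main() {
    	fmt.Println(expiresWithinDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }
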
	I1010 11:39:27.113321   13221 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53577 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1010 11:39:27.113395   13221 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1010 11:39:27.123272   13221 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 11:39:27.127038   13221 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 11:39:27.127043   13221 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 11:39:27.127078   13221 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 11:39:27.130192   13221 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 11:39:27.130484   13221 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-616000" does not appear in /Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:39:27.130575   13221 kubeconfig.go:62] /Users/jenkins/minikube-integration/19787-10623/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-616000" cluster setting kubeconfig missing "stopped-upgrade-616000" context setting]
	I1010 11:39:27.130776   13221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/kubeconfig: {Name:mk76f18909e94718c05e51991d3ea4660849ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:39:27.131190   13221 kapi.go:59] client config for stopped-upgrade-616000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/client.key", CAFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102322a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1010 11:39:27.131681   13221 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 11:39:27.134479   13221 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-616000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1010 11:39:27.134487   13221 kubeadm.go:1160] stopping kube-system containers ...
	I1010 11:39:27.134536   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1010 11:39:27.144944   13221 docker.go:483] Stopping containers: [c10b1f623e8e 295994b875c3 8e4c05f7b12f d0634f9bbbf3 33e7c52c5d74 14ff5da1faec 92c530ce8e31 d7741e6115dd]
	I1010 11:39:27.145017   13221 ssh_runner.go:195] Run: docker stop c10b1f623e8e 295994b875c3 8e4c05f7b12f d0634f9bbbf3 33e7c52c5d74 14ff5da1faec 92c530ce8e31 d7741e6115dd
	I1010 11:39:27.155808   13221 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 11:39:27.161426   13221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 11:39:27.164185   13221 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 11:39:27.164191   13221 kubeadm.go:157] found existing configuration files:
	
	I1010 11:39:27.164222   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/admin.conf
	I1010 11:39:27.166763   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 11:39:27.166790   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 11:39:27.169851   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/kubelet.conf
	I1010 11:39:27.172449   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 11:39:27.172484   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 11:39:27.174930   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/controller-manager.conf
	I1010 11:39:27.177856   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 11:39:27.177883   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 11:39:27.180723   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/scheduler.conf
	I1010 11:39:27.183149   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 11:39:27.183178   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 11:39:27.186238   13221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 11:39:27.189187   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:39:27.210999   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:39:27.550230   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:39:27.680630   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:39:27.711153   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 11:39:27.744036   13221 api_server.go:52] waiting for apiserver process to appear ...
	I1010 11:39:27.744125   13221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:39:28.244956   13221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:39:28.746215   13221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:39:28.751713   13221 api_server.go:72] duration metric: took 1.007687875s to wait for apiserver process to appear ...
	I1010 11:39:28.751725   13221 api_server.go:88] waiting for apiserver healthz status ...
	I1010 11:39:28.751745   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:33.753762   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:33.753787   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:38.753970   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:38.754009   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:43.754266   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:43.754289   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:48.754678   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:48.754722   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:53.755321   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:53.755340   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:39:58.756040   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:39:58.756133   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:03.757266   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:03.757293   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:08.757712   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:08.757752   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:13.759105   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:13.759135   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:18.760778   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:18.760823   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:23.763089   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:23.763122   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:28.765410   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
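
From here the start loop has been probing /healthz repeatedly; each probe times out after about five seconds and is retried, and after enough failures it falls back to gathering logs, as the lines below show. A minimal polling sketch of the same check — the real client presents the profile's client certificates, which this self-contained version replaces with InsecureSkipVerify purely for brevity:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the per-probe deadline seen above
    		Transport: &http.Transport{
    			// Assumption for brevity only: skip server verification instead
    			// of loading the CA and client certs from the profile directory.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for /healthz")
    }
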
	I1010 11:40:28.765612   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:40:28.777714   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:40:28.777796   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:40:28.790764   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:40:28.790838   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:40:28.801729   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:40:28.801809   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:40:28.812634   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:40:28.812727   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:40:28.823259   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:40:28.823332   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:40:28.833390   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:40:28.833470   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:40:28.844025   13221 logs.go:282] 0 containers: []
	W1010 11:40:28.844037   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:40:28.844114   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:40:28.855668   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:40:28.855688   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:40:28.855701   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:40:28.871277   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:40:28.871289   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:40:28.882928   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:40:28.882939   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:40:28.894679   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:40:28.894689   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:40:28.919405   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:40:28.919422   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:40:28.933214   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:40:28.933224   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:40:28.947588   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:40:28.947599   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:40:28.959711   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:40:28.959722   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:40:29.067960   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:40:29.067974   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:40:29.082875   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:40:29.082887   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:40:29.124517   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:40:29.124530   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:40:29.141826   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:40:29.141836   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:40:29.160686   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:40:29.160707   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:40:29.199677   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:40:29.199691   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:40:29.204267   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:40:29.204274   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:40:29.215463   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:40:29.215472   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:40:29.226993   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:40:29.227004   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
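
The container-status command above uses a shell fallback: the backtick expansion `which crictl || echo crictl` substitutes the crictl path when it is installed (and the literal word crictl when it is not, so the first command fails), and the trailing `|| sudo docker ps -a` then runs instead. A Go sketch of the same preference order:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus mirrors the fallback above: prefer crictl when it is
    // on PATH, otherwise list containers with docker.
    func containerStatus() ([]byte, error) {
    	if path, err := exec.LookPath("crictl"); err == nil {
    		return exec.Command("sudo", path, "ps", "-a").CombinedOutput()
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
    	out, err := containerStatus()
    	fmt.Printf("%s", out)
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    }
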
	I1010 11:40:31.743905   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:36.746079   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:36.746229   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:40:36.757788   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:40:36.757879   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:40:36.768623   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:40:36.768717   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:40:36.780232   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:40:36.780304   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:40:36.790788   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:40:36.790868   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:40:36.801070   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:40:36.801146   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:40:36.811327   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:40:36.811401   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:40:36.821250   13221 logs.go:282] 0 containers: []
	W1010 11:40:36.821263   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:40:36.821333   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:40:36.832193   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:40:36.832212   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:40:36.832217   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:40:36.843879   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:40:36.843890   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:40:36.869133   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:40:36.869141   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:40:36.881412   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:40:36.881423   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:40:36.895182   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:40:36.895193   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:40:36.907016   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:40:36.907029   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:40:36.920365   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:40:36.920377   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:40:36.955840   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:40:36.955851   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:40:36.969871   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:40:36.969886   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:40:36.983744   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:40:36.983758   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:40:37.001228   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:40:37.001237   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:40:37.040734   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:40:37.040744   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:40:37.044935   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:40:37.044943   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:40:37.056346   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:40:37.056361   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:40:37.067823   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:40:37.067842   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:40:37.106068   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:40:37.106078   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:40:37.117564   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:40:37.117574   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
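
The entries above trace one full diagnostic cycle: the apiserver healthz probe at https://10.0.2.15:8443/healthz is issued (11:40:31) and declared stopped five seconds later (11:40:36) on a client timeout, after which minikube enumerates each control-plane container and tails its log. The probe can be reproduced by hand from inside the guest; a minimal sketch, assuming curl is available there, with --max-time mirroring the observed five-second client timeout:

    # Probe the apiserver health endpoint directly. -k skips TLS
    # verification (the apiserver serves a self-signed cluster cert);
    # a healthy apiserver answers with the literal body "ok".
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
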
	I1010 11:40:39.634707   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:44.637065   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:44.637314   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:40:44.660548   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:40:44.660643   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:40:44.675894   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:40:44.675989   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:40:44.688246   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:40:44.688333   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:40:44.702887   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:40:44.702970   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:40:44.713551   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:40:44.713627   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:40:44.723995   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:40:44.724070   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:40:44.734449   13221 logs.go:282] 0 containers: []
	W1010 11:40:44.734461   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:40:44.734526   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:40:44.744945   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:40:44.744959   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:40:44.744964   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:40:44.785162   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:40:44.785174   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:40:44.802261   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:40:44.802274   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:40:44.816889   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:40:44.816898   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:40:44.831274   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:40:44.831287   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:40:44.856653   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:40:44.856664   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:40:44.894883   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:40:44.894895   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:40:44.909870   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:40:44.909884   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:40:44.921497   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:40:44.921508   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:40:44.932923   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:40:44.932933   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:40:44.969373   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:40:44.969384   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:40:44.981797   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:40:44.981810   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:40:44.986515   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:40:44.986520   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:40:44.998617   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:40:44.998628   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:40:45.014197   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:40:45.014211   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:40:45.031926   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:40:45.031935   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:40:45.044101   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:40:45.044115   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:40:47.556094   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:40:52.558316   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:40:52.558515   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:40:52.569611   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:40:52.569683   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:40:52.580404   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:40:52.580484   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:40:52.590731   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:40:52.590798   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:40:52.601394   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:40:52.601468   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:40:52.612213   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:40:52.612291   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:40:52.628281   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:40:52.628353   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:40:52.638546   13221 logs.go:282] 0 containers: []
	W1010 11:40:52.638560   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:40:52.638624   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:40:52.649203   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:40:52.649223   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:40:52.649228   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:40:52.665054   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:40:52.665068   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:40:52.676373   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:40:52.676384   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:40:52.715777   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:40:52.715787   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:40:52.728865   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:40:52.728877   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:40:52.747837   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:40:52.747846   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:40:52.759543   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:40:52.759552   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:40:52.783436   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:40:52.783449   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:40:52.787587   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:40:52.787593   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:40:52.801481   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:40:52.801490   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:40:52.815959   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:40:52.815970   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:40:52.841335   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:40:52.841344   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:40:52.855382   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:40:52.855395   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:40:52.893819   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:40:52.893829   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:40:52.908235   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:40:52.908245   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:40:52.919606   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:40:52.919618   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:40:52.931491   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:40:52.931500   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
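
Each gathering pass issues the same per-component commands, in varying order. The step can be replayed manually with the commands copied verbatim from the log; a sketch assuming the k8s_-prefixed container names that minikube's Docker runtime uses:

    # List matching container IDs (running or exited) for each
    # control-plane component and tail the last 400 log lines of each.
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter=name=k8s_${name} --format={{.ID}}); do
        echo "=== ${name} ${id} ==="
        docker logs --tail 400 "${id}"
      done
    done
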
	I1010 11:40:55.470970   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:00.473469   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:00.473797   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:00.498296   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:00.498427   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:00.514549   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:00.514674   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:00.528382   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:00.528462   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:00.539798   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:00.539882   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:00.550776   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:00.550855   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:00.561270   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:00.561345   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:00.571494   13221 logs.go:282] 0 containers: []
	W1010 11:41:00.571504   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:00.571569   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:00.587987   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:00.588007   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:00.588012   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:00.599686   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:00.599695   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:00.611615   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:00.611629   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:00.623136   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:00.623146   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:00.638889   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:00.638904   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:00.674778   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:00.674788   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:00.697642   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:00.697652   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:00.708408   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:00.708418   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:00.722162   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:00.722175   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:00.740121   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:00.740130   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:00.766491   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:00.766507   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:00.770901   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:00.770906   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:00.808188   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:00.808203   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:00.822182   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:00.822191   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:00.837044   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:00.837058   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:00.852319   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:00.852329   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:00.864086   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:00.864096   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:03.403592   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:08.405864   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:08.406050   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:08.419168   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:08.419256   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:08.430852   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:08.430929   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:08.441386   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:08.441471   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:08.452183   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:08.452268   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:08.462437   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:08.462515   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:08.472826   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:08.472909   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:08.483537   13221 logs.go:282] 0 containers: []
	W1010 11:41:08.483550   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:08.483618   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:08.494117   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:08.494134   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:08.494140   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:08.533823   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:08.533838   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:08.571922   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:08.571933   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:08.585788   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:08.585801   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:08.609499   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:08.609507   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:08.624444   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:08.624454   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:08.637992   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:08.638002   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:08.649008   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:08.649021   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:08.666556   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:08.666567   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:08.678478   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:08.678493   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:08.682648   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:08.682654   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:08.696507   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:08.696518   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:08.711684   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:08.711694   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:08.723656   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:08.723669   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:08.735214   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:08.735226   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:08.753302   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:08.753312   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:08.790906   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:08.790918   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:11.304489   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:16.307124   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:16.307419   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:16.330843   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:16.330955   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:16.347219   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:16.347323   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:16.364984   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:16.365064   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:16.376943   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:16.377016   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:16.387544   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:16.387608   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:16.398780   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:16.398862   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:16.410048   13221 logs.go:282] 0 containers: []
	W1010 11:41:16.410059   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:16.410127   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:16.421292   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:16.421315   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:16.421321   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:16.434162   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:16.434174   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:16.478695   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:16.478705   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:16.490446   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:16.490458   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:16.513859   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:16.513868   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:16.563272   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:16.563286   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:16.577337   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:16.577347   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:16.588839   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:16.588849   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:16.603192   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:16.603204   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:16.614822   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:16.614832   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:16.633658   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:16.633668   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:16.645012   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:16.645024   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:16.686432   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:16.686458   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:16.691645   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:16.691657   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:16.707365   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:16.707377   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:16.730466   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:16.730480   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:16.745314   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:16.745325   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:19.260864   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:24.263247   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:24.263417   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:24.278257   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:24.278356   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:24.289982   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:24.290067   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:24.300961   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:24.301039   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:24.311473   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:24.311554   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:24.326021   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:24.326105   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:24.336379   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:24.336457   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:24.346660   13221 logs.go:282] 0 containers: []
	W1010 11:41:24.346670   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:24.346746   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:24.356965   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:24.356980   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:24.356985   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:24.373766   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:24.373775   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:24.388146   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:24.388156   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:24.399280   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:24.399289   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:24.416870   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:24.416883   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:24.428964   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:24.428980   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:24.440937   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:24.440949   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:24.480620   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:24.480630   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:24.494163   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:24.494177   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:24.508717   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:24.508727   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:24.520889   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:24.520901   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:24.533631   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:24.533643   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:24.538466   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:24.538476   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:24.587021   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:24.587029   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:24.629215   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:24.629228   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:24.642674   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:24.642689   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:24.664417   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:24.664426   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:27.191407   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:32.193650   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:32.193882   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:32.210675   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:32.210779   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:32.223819   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:32.223901   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:32.234720   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:32.234799   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:32.245392   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:32.245472   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:32.263208   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:32.263290   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:32.274161   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:32.274245   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:32.283915   13221 logs.go:282] 0 containers: []
	W1010 11:41:32.283926   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:32.283993   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:32.294457   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:32.294473   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:32.294478   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:32.317362   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:32.317369   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:32.351743   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:32.351754   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:32.368506   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:32.368522   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:32.380951   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:32.380965   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:32.396953   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:32.396966   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:32.409815   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:32.409826   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:32.425398   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:32.425409   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:32.440107   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:32.440117   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:32.453259   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:32.453271   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:32.494873   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:32.494888   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:32.499423   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:32.499431   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:32.514205   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:32.514217   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:32.529538   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:32.529553   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:32.546675   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:32.546689   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:32.587094   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:32.587108   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:32.606285   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:32.606297   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:35.120798   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:40.123155   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:40.123404   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:40.139619   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:40.139731   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:40.152308   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:40.152396   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:40.163704   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:40.163783   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:40.176235   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:40.176313   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:40.190690   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:40.190769   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:40.201592   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:40.201668   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:40.212990   13221 logs.go:282] 0 containers: []
	W1010 11:41:40.213003   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:40.213072   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:40.224732   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:40.224755   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:40.224761   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:40.229174   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:40.229185   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:40.245202   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:40.245215   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:40.258284   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:40.258295   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:40.276489   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:40.276507   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:40.289608   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:40.289618   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:40.305987   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:40.305996   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:40.347565   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:40.347578   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:40.384993   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:40.385004   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:40.430487   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:40.430498   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:40.445186   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:40.445197   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:40.461705   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:40.461717   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:40.474617   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:40.474630   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:40.499890   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:40.499912   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:40.512926   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:40.512945   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:40.528606   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:40.528616   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:40.543090   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:40.543100   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:43.057105   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:48.059214   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:48.059365   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:48.079183   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:48.079234   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:48.094191   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:48.094258   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:48.105885   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:48.105953   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:48.117102   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:48.117186   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:48.139164   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:48.139245   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:48.150675   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:48.150757   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:48.162102   13221 logs.go:282] 0 containers: []
	W1010 11:41:48.162114   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:48.162186   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:48.176915   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:48.176934   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:48.176939   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:48.193969   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:48.193979   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:48.206931   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:48.206945   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:48.225564   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:48.225578   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:48.238961   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:48.238980   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:48.251206   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:48.251219   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:41:48.264482   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:48.264496   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:48.306707   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:48.306717   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:48.344617   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:48.344629   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:48.358714   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:48.358726   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:48.386068   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:48.386081   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:48.401067   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:48.401079   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:48.415781   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:48.415795   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:48.427584   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:48.427596   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:48.446255   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:48.446265   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:48.450512   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:48.450519   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:48.488115   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:48.488125   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:51.003674   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:41:56.004247   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:41:56.004357   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:41:56.020999   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:41:56.021077   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:41:56.032278   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:41:56.032353   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:41:56.044358   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:41:56.044436   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:41:56.055789   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:41:56.055869   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:41:56.070689   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:41:56.070769   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:41:56.082282   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:41:56.082382   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:41:56.093694   13221 logs.go:282] 0 containers: []
	W1010 11:41:56.093705   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:41:56.093774   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:41:56.105947   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:41:56.105965   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:41:56.105970   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:41:56.145920   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:41:56.145933   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:41:56.163973   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:41:56.163990   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:41:56.176114   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:41:56.176126   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:41:56.195844   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:41:56.195857   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:41:56.200224   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:41:56.200231   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:41:56.214980   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:41:56.214988   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:41:56.255713   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:41:56.255727   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:41:56.268093   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:41:56.268103   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:41:56.279806   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:41:56.279816   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:41:56.297529   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:41:56.297542   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:41:56.333572   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:41:56.333586   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:41:56.345468   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:41:56.345478   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:41:56.370729   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:41:56.370741   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:41:56.385523   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:41:56.385536   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:41:56.400576   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:41:56.400588   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:41:56.414695   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:41:56.414705   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
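
Beyond per-container logs, every cycle also collects host-side sources over SSH. The exact commands appear verbatim in the entries above and can be run directly in the guest to reproduce the same output:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
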
	I1010 11:41:58.928554   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:03.930731   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:03.930840   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:03.942655   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:03.942762   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:03.954199   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:03.954280   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:03.965962   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:03.966047   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:03.977329   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:03.977409   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:03.991072   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:03.991151   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:04.002558   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:04.002641   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:04.013681   13221 logs.go:282] 0 containers: []
	W1010 11:42:04.013692   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:04.013762   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:04.025769   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
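
The container IDs that feed each "Gathering logs" step come from the docker ps queries shown above: one query per control-plane component, filtered on the k8s_<component> name prefix that cri-dockerd gives Kubernetes-managed containers. Below is a hedged sketch of that discovery step, assuming docker is on PATH; the filter strings and component list are copied from the log, while the helper name is invented for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of all containers, running or exited, whose
// name matches the k8s_<component> prefix, mirroring the
// "docker ps -a --filter=name=k8s_... --format={{.ID}}" calls in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same component list minikube cycles through above.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}

Two IDs for a component (for example the apiserver's 6d2c9f0e9fd9 and 8e4c05f7b12f) indicate an exited attempt alongside its restart; zero matches for "kindnet" is expected when, as appears to be the case here, the cluster does not use the kindnet CNI, which is why the tool only logs a warning and moves on.
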
	I1010 11:42:04.025789   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:04.025794   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:04.064765   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:04.064781   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:04.077331   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:04.077343   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:04.117054   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:04.117069   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:04.132191   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:04.132205   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:04.144696   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:04.144708   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:04.157122   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:04.157137   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:04.176454   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:04.176469   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:04.181232   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:04.181239   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:04.198044   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:04.198060   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:04.239593   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:04.239605   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:04.254294   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:04.254304   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:04.265916   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:04.265928   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:04.278158   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:04.278175   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:04.289366   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:04.289381   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:04.313778   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:04.313786   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:04.328582   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:04.328593   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:06.848864   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:11.849207   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:11.849305   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:11.861052   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:11.861138   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:11.872039   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:11.872116   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:11.883021   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:11.883093   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:11.894294   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:11.894377   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:11.905816   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:11.905895   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:11.917355   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:11.917431   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:11.935377   13221 logs.go:282] 0 containers: []
	W1010 11:42:11.935386   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:11.935454   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:11.948654   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:11.948672   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:11.948677   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:11.986931   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:11.986947   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:11.991302   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:11.991308   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:12.028820   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:12.028830   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:12.042443   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:12.042453   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:12.057978   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:12.057989   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:12.070271   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:12.070281   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:12.082135   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:12.082145   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:12.120549   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:12.120562   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:12.133089   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:12.133102   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:12.147519   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:12.147529   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:12.161899   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:12.161908   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:12.173384   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:12.173396   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:12.197822   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:12.197835   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:12.213107   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:12.213117   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:12.230338   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:12.230349   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:12.244276   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:12.244285   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:14.757438   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:19.758494   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:19.758586   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:19.769935   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:19.770015   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:19.781282   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:19.781361   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:19.792783   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:19.792878   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:19.804428   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:19.804510   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:19.815899   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:19.815968   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:19.828077   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:19.828155   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:19.838631   13221 logs.go:282] 0 containers: []
	W1010 11:42:19.838643   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:19.838702   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:19.849491   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:19.849508   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:19.849514   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:19.864556   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:19.864566   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:19.878742   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:19.878752   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:19.889655   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:19.889667   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:19.913620   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:19.913628   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:19.952456   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:19.952470   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:19.967412   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:19.967421   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:19.981777   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:19.981787   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:19.997843   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:19.997855   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:20.010097   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:20.010112   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:20.014656   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:20.014663   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:20.057509   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:20.057523   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:20.069365   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:20.069381   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:20.086559   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:20.086572   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:20.123753   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:20.123767   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:20.138923   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:20.138937   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:20.153635   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:20.153646   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:22.667674   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:27.669844   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:27.670053   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:27.683262   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:27.683352   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:27.696720   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:27.696815   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:27.708678   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:27.708784   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:27.721412   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:27.721501   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:27.733100   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:27.733182   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:27.744152   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:27.744240   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:27.755070   13221 logs.go:282] 0 containers: []
	W1010 11:42:27.755080   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:27.755149   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:27.767764   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:27.767781   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:27.767787   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:27.783108   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:27.783123   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:27.794881   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:27.794896   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:27.806644   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:27.806658   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:27.821232   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:27.821246   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:27.832857   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:27.832867   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:27.846290   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:27.846302   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:27.868675   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:27.868681   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:27.904603   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:27.904617   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:27.909350   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:27.909355   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:27.923013   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:27.923027   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:27.963108   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:27.963123   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:27.975289   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:27.975304   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:27.986204   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:27.986214   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:27.998426   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:27.998434   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:28.037772   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:28.037782   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:28.055099   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:28.055110   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:30.572627   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:35.574870   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:35.574982   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:35.586943   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:35.587026   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:35.597835   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:35.597927   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:35.608553   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:35.608629   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:35.622051   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:35.622136   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:35.632730   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:35.632799   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:35.644023   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:35.644101   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:35.653819   13221 logs.go:282] 0 containers: []
	W1010 11:42:35.653834   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:35.653901   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:35.664354   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:35.664371   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:35.664377   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:35.676541   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:35.676552   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:35.716129   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:35.716139   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:35.731377   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:35.731388   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:35.747004   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:35.747015   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:35.760893   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:35.760906   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:35.784831   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:35.784839   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:35.822152   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:35.822163   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:35.836340   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:35.836349   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:35.848048   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:35.848061   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:35.859652   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:35.859663   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:35.864227   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:35.864234   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:35.900944   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:35.900954   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:35.915026   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:35.915036   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:35.928740   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:35.928753   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:35.941065   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:35.941074   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:35.959979   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:35.959989   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:38.477622   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:43.479812   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:43.479945   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:43.490947   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:43.491031   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:43.505505   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:43.505583   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:43.522773   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:43.522853   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:43.533833   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:43.533923   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:43.544232   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:43.544312   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:43.555180   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:43.555259   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:43.565258   13221 logs.go:282] 0 containers: []
	W1010 11:42:43.565273   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:43.565340   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:43.582276   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:43.582299   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:43.582304   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:43.620708   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:43.620720   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:43.634635   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:43.634644   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:43.646578   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:43.646591   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:43.650826   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:43.650832   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:43.664787   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:43.664797   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:43.679305   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:43.679314   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:43.690331   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:43.690344   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:43.727067   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:43.727077   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:43.767201   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:43.767215   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:43.778524   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:43.778534   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:43.795809   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:43.795823   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:43.819599   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:43.819606   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:43.832529   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:43.832540   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:43.847203   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:43.847213   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:43.862452   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:43.862461   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:43.879194   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:43.879204   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:46.393764   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:51.395982   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:51.396088   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:51.419272   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:51.419353   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:51.437581   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:51.437664   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:51.448686   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:51.448771   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:51.459898   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:51.459980   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:51.470211   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:51.470286   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:51.481675   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:51.481766   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:51.491741   13221 logs.go:282] 0 containers: []
	W1010 11:42:51.491755   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:51.491814   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:51.502353   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:51.502371   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:51.502376   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:51.515042   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:51.515053   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:51.532981   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:51.532991   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:51.546605   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:51.546615   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:51.559012   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:51.559023   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:51.596415   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:51.596423   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:51.610514   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:51.610527   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:51.624151   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:51.624163   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:51.639470   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:51.639479   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:42:51.653782   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:51.653792   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:51.664898   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:51.664909   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:51.687510   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:51.687520   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:51.698867   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:51.698876   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:51.722296   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:51.722307   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:51.726652   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:51.726661   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:51.767542   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:51.767552   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:51.780605   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:51.780640   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:54.317788   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:42:59.319987   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:42:59.320151   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:42:59.332196   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:42:59.332282   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:42:59.349989   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:42:59.350073   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:42:59.362522   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:42:59.362600   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:42:59.373975   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:42:59.374058   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:42:59.384871   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:42:59.384958   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:42:59.396356   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:42:59.396433   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:42:59.410538   13221 logs.go:282] 0 containers: []
	W1010 11:42:59.410550   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:42:59.410623   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:42:59.421817   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:42:59.421835   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:42:59.421840   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:42:59.436505   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:42:59.436515   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:42:59.476243   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:42:59.476253   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:42:59.487843   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:42:59.487854   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:42:59.500650   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:42:59.500660   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:42:59.512946   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:42:59.512960   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:42:59.536897   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:42:59.536910   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:42:59.576529   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:42:59.576538   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:42:59.580875   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:42:59.580883   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:42:59.592447   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:42:59.592459   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:42:59.605936   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:42:59.605950   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:42:59.617504   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:42:59.617515   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:42:59.631975   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:42:59.631989   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:42:59.644373   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:42:59.644383   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:42:59.659476   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:42:59.659490   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:42:59.678083   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:42:59.678098   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:42:59.712489   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:42:59.712503   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:43:02.228722   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:07.231034   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:07.231162   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:07.244507   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:43:07.244598   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:07.257899   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:43:07.257972   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:07.268619   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:43:07.268697   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:07.279904   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:43:07.279987   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:07.290673   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:43:07.290758   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:07.305662   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:43:07.305737   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:07.316111   13221 logs.go:282] 0 containers: []
	W1010 11:43:07.316128   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:07.316194   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:07.326948   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:43:07.326965   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:43:07.326970   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:43:07.338253   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:43:07.338267   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:43:07.359269   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:43:07.359283   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:43:07.373489   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:07.373503   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:07.377525   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:43:07.377533   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:43:07.415973   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:43:07.415986   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:43:07.432723   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:43:07.432735   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:43:07.445827   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:43:07.445839   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:43:07.457891   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:43:07.457903   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:07.470261   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:07.470271   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:07.509269   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:43:07.509291   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:43:07.526854   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:07.526865   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:07.548938   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:07.548945   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:07.584374   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:43:07.584384   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:43:07.602275   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:43:07.602285   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:43:07.616221   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:43:07.616231   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:43:07.635395   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:43:07.635405   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:43:10.148510   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:15.150807   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:15.151025   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:15.165589   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:43:15.165689   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:15.177710   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:43:15.177794   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:15.188367   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:43:15.188444   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:15.199721   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:43:15.199807   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:15.214935   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:43:15.215005   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:15.225990   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:43:15.226061   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:15.236191   13221 logs.go:282] 0 containers: []
	W1010 11:43:15.236208   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:15.236276   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:15.247185   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:43:15.247200   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:43:15.247206   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:43:15.262224   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:43:15.262235   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:43:15.276368   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:15.276377   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:15.299732   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:43:15.299740   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:43:15.321294   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:43:15.321303   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:43:15.337795   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:15.337810   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:15.376671   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:15.376681   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:15.380887   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:15.380892   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:15.414663   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:43:15.414673   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:43:15.426814   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:43:15.426826   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:43:15.439507   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:43:15.439517   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:43:15.451873   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:43:15.451885   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:43:15.466540   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:43:15.466553   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:15.479046   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:43:15.479056   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:43:15.523101   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:43:15.523114   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:43:15.537565   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:43:15.537582   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:43:15.566041   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:43:15.566056   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:43:18.086562   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:23.088838   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:23.089057   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:43:23.107141   13221 logs.go:282] 2 containers: [6d2c9f0e9fd9 8e4c05f7b12f]
	I1010 11:43:23.107246   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:43:23.119885   13221 logs.go:282] 2 containers: [4c8ac2007295 c10b1f623e8e]
	I1010 11:43:23.119965   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:43:23.134622   13221 logs.go:282] 1 containers: [c5566b127d8e]
	I1010 11:43:23.134704   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:43:23.145193   13221 logs.go:282] 2 containers: [e38701c41a97 295994b875c3]
	I1010 11:43:23.145274   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:43:23.155666   13221 logs.go:282] 1 containers: [eb6a1a0ae320]
	I1010 11:43:23.155745   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:43:23.166374   13221 logs.go:282] 2 containers: [2375c7f31bae d0634f9bbbf3]
	I1010 11:43:23.166453   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:43:23.176064   13221 logs.go:282] 0 containers: []
	W1010 11:43:23.176079   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:43:23.176142   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:43:23.186804   13221 logs.go:282] 2 containers: [7aca50d00362 3259db2ff77d]
	I1010 11:43:23.186819   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:43:23.186825   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:43:23.199878   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:43:23.199888   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:43:23.234134   13221 logs.go:123] Gathering logs for etcd [c10b1f623e8e] ...
	I1010 11:43:23.234149   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10b1f623e8e"
	I1010 11:43:23.253473   13221 logs.go:123] Gathering logs for coredns [c5566b127d8e] ...
	I1010 11:43:23.253484   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5566b127d8e"
	I1010 11:43:23.274085   13221 logs.go:123] Gathering logs for kube-scheduler [295994b875c3] ...
	I1010 11:43:23.274097   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 295994b875c3"
	I1010 11:43:23.298719   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:43:23.298735   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:43:23.322003   13221 logs.go:123] Gathering logs for kube-apiserver [6d2c9f0e9fd9] ...
	I1010 11:43:23.322013   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d2c9f0e9fd9"
	I1010 11:43:23.339455   13221 logs.go:123] Gathering logs for kube-proxy [eb6a1a0ae320] ...
	I1010 11:43:23.339469   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb6a1a0ae320"
	I1010 11:43:23.350818   13221 logs.go:123] Gathering logs for storage-provisioner [3259db2ff77d] ...
	I1010 11:43:23.350827   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3259db2ff77d"
	I1010 11:43:23.363190   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:43:23.363199   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:43:23.400607   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:43:23.400623   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:43:23.404885   13221 logs.go:123] Gathering logs for kube-apiserver [8e4c05f7b12f] ...
	I1010 11:43:23.404891   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e4c05f7b12f"
	I1010 11:43:23.472516   13221 logs.go:123] Gathering logs for storage-provisioner [7aca50d00362] ...
	I1010 11:43:23.472528   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aca50d00362"
	I1010 11:43:23.488377   13221 logs.go:123] Gathering logs for etcd [4c8ac2007295] ...
	I1010 11:43:23.488386   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8ac2007295"
	I1010 11:43:23.502285   13221 logs.go:123] Gathering logs for kube-scheduler [e38701c41a97] ...
	I1010 11:43:23.502294   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e38701c41a97"
	I1010 11:43:23.513956   13221 logs.go:123] Gathering logs for kube-controller-manager [2375c7f31bae] ...
	I1010 11:43:23.513968   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2375c7f31bae"
	I1010 11:43:23.531753   13221 logs.go:123] Gathering logs for kube-controller-manager [d0634f9bbbf3] ...
	I1010 11:43:23.531763   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0634f9bbbf3"
	I1010 11:43:26.047523   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:31.049819   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:31.049883   13221 kubeadm.go:597] duration metric: took 4m3.9252285s to restartPrimaryControlPlane
	W1010 11:43:31.049956   13221 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 11:43:31.049984   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1010 11:43:32.086614   13221 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.036629584s)
	I1010 11:43:32.086685   13221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 11:43:32.092003   13221 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 11:43:32.094936   13221 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 11:43:32.097596   13221 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 11:43:32.097602   13221 kubeadm.go:157] found existing configuration files:
	
	I1010 11:43:32.097635   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/admin.conf
	I1010 11:43:32.100097   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 11:43:32.100125   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 11:43:32.103619   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/kubelet.conf
	I1010 11:43:32.106795   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 11:43:32.106821   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 11:43:32.109858   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/controller-manager.conf
	I1010 11:43:32.112296   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 11:43:32.112324   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 11:43:32.115263   13221 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/scheduler.conf
	I1010 11:43:32.118401   13221 kubeadm.go:163] "https://control-plane.minikube.internal:53577" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53577 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 11:43:32.118426   13221 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 11:43:32.121341   13221 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 11:43:32.139680   13221 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1010 11:43:32.139828   13221 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 11:43:32.190251   13221 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 11:43:32.190317   13221 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 11:43:32.190366   13221 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 11:43:32.239141   13221 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 11:43:32.242257   13221 out.go:235]   - Generating certificates and keys ...
	I1010 11:43:32.242290   13221 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 11:43:32.242323   13221 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 11:43:32.242364   13221 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 11:43:32.242398   13221 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 11:43:32.242435   13221 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 11:43:32.242474   13221 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 11:43:32.242513   13221 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 11:43:32.242553   13221 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 11:43:32.242604   13221 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 11:43:32.242652   13221 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 11:43:32.242675   13221 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 11:43:32.242701   13221 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 11:43:32.326327   13221 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 11:43:32.445249   13221 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 11:43:32.537370   13221 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 11:43:32.592360   13221 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 11:43:32.623577   13221 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 11:43:32.623958   13221 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 11:43:32.624015   13221 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 11:43:32.715618   13221 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 11:43:32.719596   13221 out.go:235]   - Booting up control plane ...
	I1010 11:43:32.719647   13221 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 11:43:32.719683   13221 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 11:43:32.719713   13221 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 11:43:32.719758   13221 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 11:43:32.719872   13221 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 11:43:37.218193   13221 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504063 seconds
	I1010 11:43:37.218285   13221 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 11:43:37.221843   13221 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 11:43:37.741350   13221 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 11:43:37.741638   13221 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-616000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 11:43:38.247856   13221 kubeadm.go:310] [bootstrap-token] Using token: 6se1ez.f9ly5chl6izab28p
	I1010 11:43:38.256798   13221 out.go:235]   - Configuring RBAC rules ...
	I1010 11:43:38.256879   13221 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 11:43:38.259912   13221 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 11:43:38.262708   13221 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 11:43:38.263845   13221 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 11:43:38.264992   13221 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 11:43:38.266130   13221 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 11:43:38.270153   13221 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 11:43:38.462179   13221 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 11:43:38.661378   13221 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 11:43:38.661946   13221 kubeadm.go:310] 
	I1010 11:43:38.661978   13221 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 11:43:38.662021   13221 kubeadm.go:310] 
	I1010 11:43:38.662079   13221 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 11:43:38.662086   13221 kubeadm.go:310] 
	I1010 11:43:38.662097   13221 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 11:43:38.662132   13221 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 11:43:38.662178   13221 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 11:43:38.662185   13221 kubeadm.go:310] 
	I1010 11:43:38.662252   13221 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 11:43:38.662260   13221 kubeadm.go:310] 
	I1010 11:43:38.662318   13221 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 11:43:38.662322   13221 kubeadm.go:310] 
	I1010 11:43:38.662345   13221 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 11:43:38.662416   13221 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 11:43:38.662471   13221 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 11:43:38.662492   13221 kubeadm.go:310] 
	I1010 11:43:38.662530   13221 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 11:43:38.662628   13221 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 11:43:38.662633   13221 kubeadm.go:310] 
	I1010 11:43:38.662750   13221 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6se1ez.f9ly5chl6izab28p \
	I1010 11:43:38.662873   13221 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79bae548dd16cfe936348b462da3f6d7ee9037c9333f736d9e5d628396c7b6e1 \
	I1010 11:43:38.662887   13221 kubeadm.go:310] 	--control-plane 
	I1010 11:43:38.662892   13221 kubeadm.go:310] 
	I1010 11:43:38.662931   13221 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 11:43:38.662935   13221 kubeadm.go:310] 
	I1010 11:43:38.662986   13221 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6se1ez.f9ly5chl6izab28p \
	I1010 11:43:38.663065   13221 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79bae548dd16cfe936348b462da3f6d7ee9037c9333f736d9e5d628396c7b6e1 
	I1010 11:43:38.663152   13221 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
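The join commands printed above embed a discovery token CA certificate hash. kubeadm's documented format for this value is a SHA-256 digest of the CA certificate's Subject Public Key Info, rendered as "sha256:<hex>". A short Go sketch of that computation follows; the certificate path reuses the "/var/lib/minikube/certs" folder named earlier in this log, and everything else is illustrative.

    // cahash.go: compute a kubeadm-style discovery-token-ca-cert-hash
    // (SHA-256 over the CA certificate's RawSubjectPublicKeyInfo).
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Path assumed from the [certs] certificateDir line in this log.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, not the whole cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }

Run against the cluster's CA, this should reproduce the sha256:79bae548... value shown in the join commands above.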
	I1010 11:43:38.663161   13221 cni.go:84] Creating CNI manager for ""
	I1010 11:43:38.663169   13221 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:43:38.666874   13221 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 11:43:38.673850   13221 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 11:43:38.676842   13221 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 11:43:38.681768   13221 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 11:43:38.681820   13221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 11:43:38.681839   13221 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-616000 minikube.k8s.io/updated_at=2024_10_10T11_43_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=stopped-upgrade-616000 minikube.k8s.io/primary=true
	I1010 11:43:38.684907   13221 ops.go:34] apiserver oom_adj: -16
	I1010 11:43:38.727442   13221 kubeadm.go:1113] duration metric: took 45.666375ms to wait for elevateKubeSystemPrivileges
	I1010 11:43:38.727530   13221 kubeadm.go:394] duration metric: took 4m11.616681416s to StartCluster
	I1010 11:43:38.727543   13221 settings.go:142] acquiring lock: {Name:mkc38780b398d6ae1b1dc4b65b91e70a285222f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:43:38.727642   13221 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:43:38.728077   13221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/kubeconfig: {Name:mk76f18909e94718c05e51991d3ea4660849ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:43:38.728291   13221 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:43:38.728340   13221 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 11:43:38.728382   13221 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:43:38.728385   13221 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-616000"
	I1010 11:43:38.728392   13221 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-616000"
	W1010 11:43:38.728396   13221 addons.go:243] addon storage-provisioner should already be in state true
	I1010 11:43:38.728408   13221 host.go:66] Checking if "stopped-upgrade-616000" exists ...
	I1010 11:43:38.728416   13221 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-616000"
	I1010 11:43:38.728425   13221 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-616000"
	I1010 11:43:38.732675   13221 out.go:177] * Verifying Kubernetes components...
	I1010 11:43:38.733354   13221 kapi.go:59] client config for stopped-upgrade-616000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/stopped-upgrade-616000/client.key", CAFile:"/Users/jenkins/minikube-integration/19787-10623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102322a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1010 11:43:38.737095   13221 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-616000"
	W1010 11:43:38.737099   13221 addons.go:243] addon default-storageclass should already be in state true
	I1010 11:43:38.737106   13221 host.go:66] Checking if "stopped-upgrade-616000" exists ...
	I1010 11:43:38.737694   13221 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 11:43:38.737700   13221 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 11:43:38.737705   13221 sshutil.go:53] new ssh client: &{IP:localhost Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1010 11:43:38.740811   13221 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 11:43:38.744871   13221 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 11:43:38.750951   13221 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 11:43:38.750959   13221 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 11:43:38.750967   13221 sshutil.go:53] new ssh client: &{IP:localhost Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1010 11:43:38.840320   13221 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 11:43:38.846147   13221 api_server.go:52] waiting for apiserver process to appear ...
	I1010 11:43:38.846203   13221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 11:43:38.849998   13221 api_server.go:72] duration metric: took 121.698542ms to wait for apiserver process to appear ...
	I1010 11:43:38.850005   13221 api_server.go:88] waiting for apiserver healthz status ...
	I1010 11:43:38.850012   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:38.883825   13221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 11:43:38.903650   13221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 11:43:39.257790   13221 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1010 11:43:39.257801   13221 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1010 11:43:43.852036   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:43.852066   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:48.852498   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:48.852517   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:53.852832   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:53.852887   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:43:58.853600   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:43:58.853638   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:03.854309   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:03.854346   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:08.855222   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:08.855257   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1010 11:44:09.259915   13221 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1010 11:44:09.264277   13221 out.go:177] * Enabled addons: storage-provisioner
	I1010 11:44:09.270211   13221 addons.go:510] duration metric: took 30.542176625s for enable addons: enabled=[storage-provisioner]
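The "default-storageclass" failure above comes from a StorageClass list request that never reaches the apiserver. A minimal client-go sketch of the equivalent call is below; the kubeconfig path is taken from the kubectl commands in this log, while the timeout and error handling are assumptions.

    // sclist.go: list StorageClasses, the request that fails above with
    // "dial tcp 10.0.2.15:8443: i/o timeout" when the apiserver is down.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as used by the in-VM kubectl invocations in this log.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            log.Fatal(err) // with an unreachable apiserver this is the i/o timeout seen above
        }
        for _, sc := range scs.Items {
            fmt.Println(sc.Name)
        }
    }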
	I1010 11:44:13.856323   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:13.856347   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:18.857670   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:18.857722   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:23.859644   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:23.859683   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:28.861913   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:28.861962   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:33.862970   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:33.862985   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:38.865117   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:38.865258   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:44:38.894644   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:44:38.894719   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:44:38.906093   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:44:38.906171   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:44:38.916355   13221 logs.go:282] 2 containers: [d5432a0dc833 a54944ee99d4]
	I1010 11:44:38.916436   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:44:38.927331   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:44:38.927409   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:44:38.938210   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:44:38.938299   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:44:38.948439   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:44:38.948511   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:44:38.958304   13221 logs.go:282] 0 containers: []
	W1010 11:44:38.958314   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:44:38.958383   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:44:38.968627   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:44:38.968645   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:44:38.968650   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:44:38.980017   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:44:38.980026   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:44:38.995732   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:44:38.995745   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:44:39.013548   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:44:39.013564   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:44:39.025342   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:44:39.025356   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:44:39.050860   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:44:39.050869   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:44:39.086199   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:44:39.086206   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:44:39.124129   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:44:39.124141   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:44:39.139618   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:44:39.139629   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:44:39.154057   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:44:39.154067   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:44:39.166792   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:44:39.166803   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:44:39.178566   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:44:39.178576   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:44:39.190785   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:44:39.190796   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:44:41.697506   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:46.700099   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:46.700206   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:44:46.711512   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:44:46.711594   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:44:46.722193   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:44:46.722272   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:44:46.732822   13221 logs.go:282] 2 containers: [d5432a0dc833 a54944ee99d4]
	I1010 11:44:46.732890   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:44:46.743899   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:44:46.743977   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:44:46.754524   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:44:46.754616   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:44:46.765224   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:44:46.765301   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:44:46.775317   13221 logs.go:282] 0 containers: []
	W1010 11:44:46.775327   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:44:46.775384   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:44:46.785431   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:44:46.785448   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:44:46.785455   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:44:46.800128   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:44:46.800140   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:44:46.804945   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:44:46.804954   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:44:46.819359   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:44:46.819370   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:44:46.834949   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:44:46.834959   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:44:46.848091   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:44:46.848101   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:44:46.872637   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:44:46.872653   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:44:46.884427   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:44:46.884438   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:44:46.902175   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:44:46.902187   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:44:46.936555   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:44:46.936576   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:44:46.971928   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:44:46.971939   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:44:46.986825   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:44:46.986838   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:44:47.001065   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:44:47.001079   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:44:49.516009   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:44:54.518833   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:44:54.519354   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:44:54.557012   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:44:54.557131   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:44:54.576444   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:44:54.576532   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:44:54.590723   13221 logs.go:282] 2 containers: [d5432a0dc833 a54944ee99d4]
	I1010 11:44:54.590801   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:44:54.601489   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:44:54.601573   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:44:54.612257   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:44:54.612343   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:44:54.623028   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:44:54.623103   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:44:54.633349   13221 logs.go:282] 0 containers: []
	W1010 11:44:54.633362   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:44:54.633425   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:44:54.643635   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:44:54.643657   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:44:54.643662   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:44:54.655413   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:44:54.655426   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:44:54.666740   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:44:54.666750   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:44:54.690566   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:44:54.690573   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:44:54.703815   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:44:54.703824   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:44:54.708598   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:44:54.708603   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:44:54.744188   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:44:54.744198   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:44:54.759233   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:44:54.759244   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:44:54.772176   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:44:54.772186   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:44:54.787389   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:44:54.787398   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:44:54.799726   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:44:54.799735   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:44:54.820621   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:44:54.820631   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:44:54.855798   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:44:54.855808   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:44:57.370239   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:45:02.372205   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:45:02.372751   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:45:02.410258   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:45:02.410407   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:45:02.432042   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:45:02.432170   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:45:02.447072   13221 logs.go:282] 2 containers: [d5432a0dc833 a54944ee99d4]
	I1010 11:45:02.447158   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:45:02.464099   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:45:02.464184   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:45:02.475040   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:45:02.475116   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:45:02.486321   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:45:02.486400   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:45:02.496907   13221 logs.go:282] 0 containers: []
	W1010 11:45:02.496919   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:45:02.496987   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:45:02.507874   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:45:02.507892   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:45:02.507898   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:45:02.512533   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:45:02.512541   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:45:02.532142   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:45:02.532155   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:45:02.543753   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:45:02.543766   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:45:02.557027   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:45:02.557038   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:45:02.568830   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:45:02.568840   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:45:02.602673   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:45:02.602682   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:45:02.637534   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:45:02.637547   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:45:02.653160   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:45:02.653173   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:45:02.665739   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:45:02.665752   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:45:02.684236   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:45:02.684248   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:45:02.695805   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:45:02.695816   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:45:02.712681   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:45:02.712694   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:45:05.240359   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:45:10.241300   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:45:10.241373   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:45:10.253610   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:45:10.253683   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:45:10.268861   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:45:10.268926   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:45:10.280519   13221 logs.go:282] 2 containers: [d5432a0dc833 a54944ee99d4]
	I1010 11:45:10.280593   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:45:10.299488   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:45:10.299555   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:45:10.310258   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:45:10.310323   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:45:10.324623   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:45:10.324694   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:45:10.335619   13221 logs.go:282] 0 containers: []
	W1010 11:45:10.335629   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:45:10.335680   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:45:10.346943   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:45:10.346956   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:45:10.346962   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:45:10.359438   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:45:10.359447   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:45:10.374212   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:45:10.374221   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:45:10.386961   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:45:10.386970   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:45:10.421465   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:45:10.421472   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:45:10.425703   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:45:10.425711   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:45:10.460771   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:45:10.460781   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:45:10.475109   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:45:10.475119   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:45:10.486627   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:45:10.486636   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:45:10.510886   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:45:10.510892   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:45:10.523253   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:45:10.523267   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:45:10.537311   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:45:10.537320   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:45:10.551474   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:45:10.551490   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:45:13.070772   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:45:18.072710   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:45:18.073216   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:45:18.113465   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:45:18.113626   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:45:18.135881   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:45:18.136011   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:45:18.158329   13221 logs.go:282] 2 containers: [d5432a0dc833 a54944ee99d4]
	I1010 11:45:18.158420   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:45:18.169538   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:45:18.169615   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:45:18.180007   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:45:18.180081   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:45:18.190689   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:45:18.190765   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:45:18.200852   13221 logs.go:282] 0 containers: []
	W1010 11:45:18.200865   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:45:18.200926   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:45:18.218278   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:45:18.218294   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:45:18.218300   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:45:18.222611   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:45:18.222616   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:45:18.241278   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:45:18.241288   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:45:18.252908   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:45:18.252918   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:45:18.263998   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:45:18.264007   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:45:18.275492   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:45:18.275505   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:45:18.286640   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:45:18.286652   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:45:18.309378   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:45:18.309384   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:45:18.320615   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:45:18.320625   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:45:18.353532   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:45:18.353539   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:45:18.388529   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:45:18.388542   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:45:18.402419   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:45:18.402428   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:45:18.417817   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:45:18.417829   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:45:20.936398   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:45:25.938657   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:45:25.939260   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:45:25.980474   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:45:25.980631   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:45:26.001650   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:45:26.001776   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:45:26.016138   13221 logs.go:282] 2 containers: [d5432a0dc833 a54944ee99d4]
	I1010 11:45:26.016214   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:45:26.028941   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:45:26.029010   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:45:26.040337   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:45:26.040413   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:45:26.051067   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:45:26.051146   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:45:26.061429   13221 logs.go:282] 0 containers: []
	W1010 11:45:26.061439   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:45:26.061504   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:45:26.071951   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:45:26.071966   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:45:26.071973   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:45:26.089682   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:45:26.089694   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:45:26.101570   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:45:26.101583   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:45:26.134788   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:45:26.134797   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:45:26.150264   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:45:26.150274   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:45:26.164594   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:45:26.164603   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:45:26.178512   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:45:26.178524   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:45:26.190465   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:45:26.190478   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:45:26.205250   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:45:26.205262   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:45:26.219108   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:45:26.219118   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:45:26.242568   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:45:26.242574   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:45:26.246453   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:45:26.246462   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:45:26.283677   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:45:26.283688   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:45:28.795972   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:45:33.798287   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:45:33.798598   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:45:33.826970   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:45:33.827106   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:45:33.844741   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:45:33.844834   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:45:33.857754   13221 logs.go:282] 2 containers: [d5432a0dc833 a54944ee99d4]
	I1010 11:45:33.857840   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:45:33.868821   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:45:33.868900   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:45:33.879503   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:45:33.879584   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:45:33.890057   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:45:33.890132   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:45:33.901923   13221 logs.go:282] 0 containers: []
	W1010 11:45:33.901932   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:45:33.901999   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:45:33.912074   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:45:33.912092   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:45:33.912098   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:45:33.947482   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:45:33.947492   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:45:33.951905   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:45:33.951913   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:45:33.989970   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:45:33.989984   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:45:34.004258   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:45:34.004270   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:45:34.017950   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:45:34.017959   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:45:34.034371   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:45:34.034384   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:45:34.046053   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:45:34.046062   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:45:34.067471   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:45:34.067483   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:45:34.079183   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:45:34.079192   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:45:34.096070   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:45:34.096085   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:45:34.116758   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:45:34.116768   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:45:34.140875   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:45:34.140885   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:45:36.654466   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:45:41.656614   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:45:41.656803   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:45:41.669752   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:45:41.669842   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:45:41.680578   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:45:41.680658   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:45:41.691309   13221 logs.go:282] 2 containers: [d5432a0dc833 a54944ee99d4]
	I1010 11:45:41.691386   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:45:41.705304   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:45:41.705379   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:45:41.715441   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:45:41.715527   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:45:41.726221   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:45:41.726299   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:45:41.737776   13221 logs.go:282] 0 containers: []
	W1010 11:45:41.737787   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:45:41.737850   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:45:41.748075   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:45:41.748090   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:45:41.748096   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:45:41.781853   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:45:41.781860   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:45:41.785806   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:45:41.785815   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:45:41.801003   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:45:41.801016   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:45:41.812772   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:45:41.812785   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:45:41.827879   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:45:41.827889   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:45:41.846627   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:45:41.846640   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:45:41.885011   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:45:41.885023   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:45:41.899126   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:45:41.899135   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:45:41.915287   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:45:41.915298   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:45:41.926761   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:45:41.926771   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:45:41.938608   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:45:41.938619   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:45:41.963305   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:45:41.963312   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:45:44.476649   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:45:49.475807   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:45:49.476327   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:45:49.516750   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:45:49.516908   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:45:49.538986   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:45:49.539120   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:45:49.554394   13221 logs.go:282] 2 containers: [d5432a0dc833 a54944ee99d4]
	I1010 11:45:49.554478   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:45:49.570880   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:45:49.570963   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:45:49.581514   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:45:49.581588   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:45:49.592637   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:45:49.592717   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:45:49.602699   13221 logs.go:282] 0 containers: []
	W1010 11:45:49.602710   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:45:49.602777   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:45:49.614570   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:45:49.614585   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:45:49.614590   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:45:49.626850   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:45:49.626863   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:45:49.639195   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:45:49.639207   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:45:49.655040   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:45:49.655052   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:45:49.690019   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:45:49.690026   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:45:49.694130   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:45:49.694139   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:45:49.729191   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:45:49.729204   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:45:49.744083   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:45:49.744094   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:45:49.758187   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:45:49.758199   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:45:49.779523   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:45:49.779536   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:45:49.803174   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:45:49.803184   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:45:49.821740   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:45:49.821752   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:45:49.833125   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:45:49.833136   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:45:52.344879   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:45:57.344033   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:45:57.344515   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:45:57.383543   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:45:57.383684   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:45:57.405796   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:45:57.405898   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:45:57.420643   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:45:57.420714   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:45:57.436855   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:45:57.436931   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:45:57.447434   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:45:57.447510   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:45:57.458133   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:45:57.458217   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:45:57.468049   13221 logs.go:282] 0 containers: []
	W1010 11:45:57.468060   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:45:57.468117   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:45:57.478390   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:45:57.478406   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:45:57.478411   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:45:57.482742   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:45:57.482752   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:45:57.493917   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:45:57.493928   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:45:57.505466   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:45:57.505476   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:45:57.519837   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:45:57.519852   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:45:57.531928   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:45:57.531938   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:45:57.547242   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:45:57.547254   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:45:57.582571   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:45:57.582579   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:45:57.594574   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:45:57.594584   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:45:57.608431   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:45:57.608444   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:45:57.620609   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:45:57.620621   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:45:57.654902   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:45:57.654912   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:45:57.668867   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:45:57.668880   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:45:57.682646   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:45:57.682660   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:45:57.699370   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:45:57.699379   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:46:00.224423   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:46:05.226248   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:46:05.226912   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:46:05.263569   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:46:05.263732   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:46:05.285685   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:46:05.285788   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:46:05.301354   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:46:05.301441   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:46:05.313357   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:46:05.313439   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:46:05.324223   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:46:05.324293   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:46:05.335289   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:46:05.335369   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:46:05.346063   13221 logs.go:282] 0 containers: []
	W1010 11:46:05.346074   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:46:05.346140   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:46:05.357588   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:46:05.357605   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:46:05.357610   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:46:05.374931   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:46:05.374941   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:46:05.387279   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:46:05.387291   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:46:05.411449   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:46:05.411457   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:46:05.446594   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:46:05.446602   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:46:05.450947   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:46:05.450955   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:46:05.462250   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:46:05.462262   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:46:05.473888   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:46:05.473901   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:46:05.488917   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:46:05.488929   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:46:05.524394   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:46:05.524404   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:46:05.538976   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:46:05.538988   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:46:05.557535   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:46:05.557546   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:46:05.570059   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:46:05.570072   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:46:05.581513   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:46:05.581528   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:46:05.593642   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:46:05.593655   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:46:08.107768   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:46:13.109884   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:46:13.109970   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:46:13.121351   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:46:13.121416   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:46:13.133433   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:46:13.133509   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:46:13.146624   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:46:13.146685   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:46:13.159173   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:46:13.159246   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:46:13.172370   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:46:13.172457   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:46:13.183302   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:46:13.183375   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:46:13.196982   13221 logs.go:282] 0 containers: []
	W1010 11:46:13.196994   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:46:13.197049   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:46:13.207623   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:46:13.207639   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:46:13.207644   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:46:13.221564   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:46:13.221575   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:46:13.238474   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:46:13.238485   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:46:13.252973   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:46:13.252985   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:46:13.290677   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:46:13.290687   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:46:13.306891   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:46:13.306902   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:46:13.325582   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:46:13.325596   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:46:13.330302   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:46:13.330314   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:46:13.345077   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:46:13.345091   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:46:13.369940   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:46:13.369950   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:46:13.383134   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:46:13.383145   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:46:13.399000   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:46:13.399010   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:46:13.410979   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:46:13.410988   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:46:13.448257   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:46:13.448273   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:46:13.460815   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:46:13.460829   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:46:15.975928   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:46:20.977989   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:46:20.978622   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:46:21.020657   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:46:21.020772   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:46:21.041081   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:46:21.041173   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:46:21.058167   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:46:21.058256   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:46:21.072029   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:46:21.072125   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:46:21.086584   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:46:21.086673   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:46:21.100407   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:46:21.100482   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:46:21.113114   13221 logs.go:282] 0 containers: []
	W1010 11:46:21.113128   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:46:21.113197   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:46:21.126466   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:46:21.126489   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:46:21.126495   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:46:21.147064   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:46:21.147081   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:46:21.160698   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:46:21.160712   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:46:21.208114   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:46:21.208135   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:46:21.226280   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:46:21.226291   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:46:21.242001   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:46:21.242011   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:46:21.268342   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:46:21.268349   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:46:21.303915   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:46:21.303922   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:46:21.317917   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:46:21.317927   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:46:21.331555   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:46:21.331565   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:46:21.344011   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:46:21.344020   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:46:21.356192   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:46:21.356202   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:46:21.360246   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:46:21.360255   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:46:21.374205   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:46:21.374215   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:46:21.388444   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:46:21.388453   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:46:23.900629   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:46:28.902548   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:46:28.902834   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:46:28.926096   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:46:28.926207   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:46:28.944885   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:46:28.944963   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:46:28.956160   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:46:28.956238   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:46:28.973744   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:46:28.973820   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:46:28.984776   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:46:28.984843   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:46:28.995234   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:46:28.995302   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:46:29.005321   13221 logs.go:282] 0 containers: []
	W1010 11:46:29.005332   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:46:29.005398   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:46:29.015858   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:46:29.015876   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:46:29.015885   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:46:29.020226   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:46:29.020232   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:46:29.035967   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:46:29.035979   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:46:29.050470   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:46:29.050480   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:46:29.062436   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:46:29.062448   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:46:29.080255   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:46:29.080264   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:46:29.092163   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:46:29.092175   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:46:29.103102   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:46:29.103111   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:46:29.128209   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:46:29.128217   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:46:29.139816   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:46:29.139827   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:46:29.151182   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:46:29.151196   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:46:29.166225   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:46:29.166238   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:46:29.200665   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:46:29.200671   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:46:29.238768   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:46:29.238782   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:46:29.250703   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:46:29.250713   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:46:31.764487   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:46:36.767111   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:46:36.767210   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:46:36.782795   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:46:36.782871   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:46:36.798379   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:46:36.798466   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:46:36.810204   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:46:36.810307   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:46:36.822667   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:46:36.822761   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:46:36.841240   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:46:36.841335   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:46:36.853068   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:46:36.853161   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:46:36.866794   13221 logs.go:282] 0 containers: []
	W1010 11:46:36.866806   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:46:36.866855   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:46:36.878459   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:46:36.878478   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:46:36.878484   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:46:36.883093   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:46:36.883101   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:46:36.895196   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:46:36.895208   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:46:36.930791   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:46:36.930810   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:46:36.946238   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:46:36.946252   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:46:36.961075   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:46:36.961088   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:46:36.974038   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:46:36.974050   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:46:37.006369   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:46:37.006380   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:46:37.022305   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:46:37.022317   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:46:37.035568   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:46:37.035581   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:46:37.047807   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:46:37.047819   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:46:37.084671   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:46:37.084684   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:46:37.109782   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:46:37.109797   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:46:37.123494   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:46:37.123503   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:46:37.143446   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:46:37.143462   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:46:39.663648   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:46:44.666443   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:46:44.666659   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:46:44.679533   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:46:44.679618   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:46:44.689784   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:46:44.689863   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:46:44.700512   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:46:44.700590   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:46:44.710919   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:46:44.710986   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:46:44.721107   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:46:44.721175   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:46:44.731537   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:46:44.731604   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:46:44.741454   13221 logs.go:282] 0 containers: []
	W1010 11:46:44.741465   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:46:44.741534   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:46:44.752538   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:46:44.752560   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:46:44.752566   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:46:44.787797   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:46:44.787807   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:46:44.799652   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:46:44.799666   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:46:44.810786   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:46:44.810799   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:46:44.822453   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:46:44.822468   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:46:44.826758   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:46:44.826767   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:46:44.851302   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:46:44.851308   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:46:44.862348   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:46:44.862361   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:46:44.876478   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:46:44.876491   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:46:44.888730   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:46:44.888742   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:46:44.900674   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:46:44.900688   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:46:44.915961   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:46:44.915971   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:46:44.933617   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:46:44.933627   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:46:44.967196   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:46:44.967206   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:46:44.981735   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:46:44.981745   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:46:47.497441   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:46:52.500105   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:46:52.500741   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:46:52.542423   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:46:52.542560   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:46:52.564602   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:46:52.564712   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:46:52.579043   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:46:52.579128   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:46:52.590497   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:46:52.590573   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:46:52.601182   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:46:52.601257   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:46:52.611660   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:46:52.611736   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:46:52.622257   13221 logs.go:282] 0 containers: []
	W1010 11:46:52.622270   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:46:52.622342   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:46:52.632890   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:46:52.632910   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:46:52.632917   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:46:52.637562   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:46:52.637571   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:46:52.651868   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:46:52.651881   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:46:52.669013   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:46:52.669022   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:46:52.695242   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:46:52.695251   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:46:52.707290   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:46:52.707301   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:46:52.723478   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:46:52.723487   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:46:52.744060   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:46:52.744070   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:46:52.778730   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:46:52.778738   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:46:52.812849   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:46:52.812858   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:46:52.824564   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:46:52.824575   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:46:52.836634   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:46:52.836645   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:46:52.848642   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:46:52.848656   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:46:52.863688   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:46:52.863697   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:46:52.879258   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:46:52.879268   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:46:55.394056   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:47:00.396921   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:47:00.397536   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:47:00.435848   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:47:00.436054   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:47:00.457114   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:47:00.457237   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:47:00.472535   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:47:00.472631   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:47:00.487847   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:47:00.487926   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:47:00.499521   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:47:00.499599   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:47:00.519038   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:47:00.519121   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:47:00.530507   13221 logs.go:282] 0 containers: []
	W1010 11:47:00.530520   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:47:00.530606   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:47:00.543882   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:47:00.543904   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:47:00.543910   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:47:00.588337   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:47:00.588348   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:47:00.603067   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:47:00.603083   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:47:00.616512   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:47:00.616523   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:47:00.628921   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:47:00.628933   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:47:00.641860   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:47:00.641873   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:47:00.654270   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:47:00.654280   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:47:00.670753   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:47:00.670767   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:47:00.689773   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:47:00.689785   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:47:00.702118   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:47:00.702131   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:47:00.737638   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:47:00.737653   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:47:00.742351   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:47:00.742358   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:47:00.757967   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:47:00.757979   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:47:00.770970   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:47:00.770983   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:47:00.789105   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:47:00.789118   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:47:03.315653   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:47:08.317934   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:47:08.318429   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:47:08.358015   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:47:08.358194   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:47:08.380859   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:47:08.380984   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:47:08.397673   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:47:08.397766   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:47:08.411107   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:47:08.411184   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:47:08.422055   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:47:08.422123   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:47:08.433188   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:47:08.433264   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:47:08.444130   13221 logs.go:282] 0 containers: []
	W1010 11:47:08.444143   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:47:08.444198   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:47:08.455034   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:47:08.455051   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:47:08.455057   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:47:08.467248   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:47:08.467261   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:47:08.478612   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:47:08.478622   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:47:08.513744   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:47:08.513752   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:47:08.525294   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:47:08.525305   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:47:08.538235   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:47:08.538246   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:47:08.577013   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:47:08.577026   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:47:08.588730   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:47:08.588743   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:47:08.600612   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:47:08.600626   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:47:08.623991   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:47:08.623998   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:47:08.628500   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:47:08.628508   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:47:08.642675   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:47:08.642688   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:47:08.656834   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:47:08.656843   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:47:08.672035   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:47:08.672049   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:47:08.689834   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:47:08.689844   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:47:11.202181   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:47:16.204882   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:47:16.205459   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:47:16.246291   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:47:16.246437   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:47:16.268248   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:47:16.268356   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:47:16.283961   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:47:16.284057   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:47:16.303765   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:47:16.303836   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:47:16.314444   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:47:16.314523   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:47:16.325835   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:47:16.325900   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:47:16.336964   13221 logs.go:282] 0 containers: []
	W1010 11:47:16.336975   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:47:16.337043   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:47:16.347654   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:47:16.347675   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:47:16.347680   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:47:16.359984   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:47:16.359998   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:47:16.371941   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:47:16.371955   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:47:16.383567   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:47:16.383580   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:47:16.399308   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:47:16.399321   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:47:16.412075   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:47:16.412086   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:47:16.447526   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:47:16.447537   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:47:16.482432   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:47:16.482450   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:47:16.510152   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:47:16.510167   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:47:16.530364   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:47:16.530375   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:47:16.554586   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:47:16.554597   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:47:16.559192   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:47:16.559198   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:47:16.573823   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:47:16.573833   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:47:16.596277   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:47:16.596286   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:47:16.614075   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:47:16.614085   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:47:19.129106   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:47:24.131988   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:47:24.132594   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:47:24.169485   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:47:24.169645   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:47:24.190283   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:47:24.190393   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:47:24.213246   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:47:24.213336   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:47:24.229887   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:47:24.229967   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:47:24.241089   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:47:24.241169   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:47:24.251868   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:47:24.251950   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:47:24.262023   13221 logs.go:282] 0 containers: []
	W1010 11:47:24.262034   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:47:24.262098   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:47:24.272724   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:47:24.272742   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:47:24.272747   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:47:24.287079   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:47:24.287091   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:47:24.304394   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:47:24.304405   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:47:24.308829   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:47:24.308838   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:47:24.380960   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:47:24.380970   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:47:24.393747   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:47:24.393761   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:47:24.406130   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:47:24.406144   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:47:24.421704   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:47:24.421714   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:47:24.444704   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:47:24.444712   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:47:24.456002   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:47:24.456015   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:47:24.468037   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:47:24.468046   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:47:24.479412   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:47:24.479424   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:47:24.513302   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:47:24.513309   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:47:24.527643   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:47:24.527656   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:47:24.547514   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:47:24.547526   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:47:27.061643   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:47:32.063843   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:47:32.064427   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1010 11:47:32.115108   13221 logs.go:282] 1 containers: [7b7718a86525]
	I1010 11:47:32.115251   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1010 11:47:32.134170   13221 logs.go:282] 1 containers: [7a56658a8548]
	I1010 11:47:32.134258   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1010 11:47:32.151605   13221 logs.go:282] 4 containers: [ff7b2dc74fc1 e92127f5f11d d5432a0dc833 a54944ee99d4]
	I1010 11:47:32.151677   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1010 11:47:32.163310   13221 logs.go:282] 1 containers: [1d5678ab4151]
	I1010 11:47:32.163391   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1010 11:47:32.174692   13221 logs.go:282] 1 containers: [e3a7bd4c3de4]
	I1010 11:47:32.174770   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1010 11:47:32.189553   13221 logs.go:282] 1 containers: [a06cdb2fe555]
	I1010 11:47:32.189631   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1010 11:47:32.199380   13221 logs.go:282] 0 containers: []
	W1010 11:47:32.199392   13221 logs.go:284] No container was found matching "kindnet"
	I1010 11:47:32.199449   13221 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1010 11:47:32.210368   13221 logs.go:282] 1 containers: [a7afb2ddb4b8]
	I1010 11:47:32.210386   13221 logs.go:123] Gathering logs for describe nodes ...
	I1010 11:47:32.210391   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 11:47:32.246321   13221 logs.go:123] Gathering logs for coredns [e92127f5f11d] ...
	I1010 11:47:32.246331   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e92127f5f11d"
	I1010 11:47:32.258067   13221 logs.go:123] Gathering logs for kube-scheduler [1d5678ab4151] ...
	I1010 11:47:32.258080   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d5678ab4151"
	I1010 11:47:32.281209   13221 logs.go:123] Gathering logs for kube-controller-manager [a06cdb2fe555] ...
	I1010 11:47:32.281221   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06cdb2fe555"
	I1010 11:47:32.298742   13221 logs.go:123] Gathering logs for storage-provisioner [a7afb2ddb4b8] ...
	I1010 11:47:32.298753   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7afb2ddb4b8"
	I1010 11:47:32.309984   13221 logs.go:123] Gathering logs for dmesg ...
	I1010 11:47:32.309994   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 11:47:32.314378   13221 logs.go:123] Gathering logs for etcd [7a56658a8548] ...
	I1010 11:47:32.314388   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a56658a8548"
	I1010 11:47:32.328000   13221 logs.go:123] Gathering logs for coredns [a54944ee99d4] ...
	I1010 11:47:32.328011   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a54944ee99d4"
	I1010 11:47:32.340108   13221 logs.go:123] Gathering logs for kubelet ...
	I1010 11:47:32.340121   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 11:47:32.372985   13221 logs.go:123] Gathering logs for kube-apiserver [7b7718a86525] ...
	I1010 11:47:32.372993   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b7718a86525"
	I1010 11:47:32.387053   13221 logs.go:123] Gathering logs for Docker ...
	I1010 11:47:32.387063   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1010 11:47:32.411962   13221 logs.go:123] Gathering logs for container status ...
	I1010 11:47:32.411968   13221 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 11:47:32.425181   13221 logs.go:123] Gathering logs for coredns [ff7b2dc74fc1] ...
	I1010 11:47:32.425192   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff7b2dc74fc1"
	I1010 11:47:32.437018   13221 logs.go:123] Gathering logs for coredns [d5432a0dc833] ...
	I1010 11:47:32.437030   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5432a0dc833"
	I1010 11:47:32.449099   13221 logs.go:123] Gathering logs for kube-proxy [e3a7bd4c3de4] ...
	I1010 11:47:32.449108   13221 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a7bd4c3de4"
	I1010 11:47:34.962834   13221 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1010 11:47:39.965474   13221 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1010 11:47:39.969615   13221 out.go:201] 
	W1010 11:47:39.980609   13221 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1010 11:47:39.980615   13221 out.go:270] * 
	W1010 11:47:39.981097   13221 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:47:39.995607   13221 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-616000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (576.61s)
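The trace above is the whole failure mode for this test: after the restored v1.24.1 cluster boots, minikube repeatedly polls the guest apiserver at https://10.0.2.15:8443/healthz with a 5s client timeout, gathers component logs between attempts, and exits with GUEST_START once the 6m0s node-wait deadline passes. A minimal sketch of running the same probe by hand from the host, assuming the guest IP/port from the log are reachable (plain curl options, nothing minikube-specific):

	# -k skips TLS verification (minikube's apiserver uses a self-signed CA);
	# --max-time 5 mirrors the 5s client timeout seen in the api_server.go lines above.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# A healthy apiserver answers "ok"; here the request would hang and time out,
	# matching the repeated "context deadline exceeded" entries in the log.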

                                                
                                    
TestPause/serial/Start (10.15s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-037000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-037000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.10123475s)

                                                
                                                
-- stdout --
	* [pause-037000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-037000" primary control-plane node in "pause-037000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-037000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-037000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-037000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-037000 -n pause-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-037000 -n pause-037000: exit status 7 (45.338542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.15s)
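Unlike the upgrade failure above, this test (and the NoKubernetes and NetworkPlugins groups that follow) never reaches provisioning: the qemu2 driver cannot connect to the socket_vmnet daemon behind /var/run/socket_vmnet, so both VM creation attempts fail and minikube exits with GUEST_PROVISION. A hand check of the daemon, as a sketch (the launchd label is a guess based on a typical socket_vmnet install and may differ on this agent):

	# Does the unix socket exist, and is anything serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If socket_vmnet was installed as a launchd service (hypothetical label),
	# restarting it usually clears a stale or orphaned socket:
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet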

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-202000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-202000 --driver=qemu2 : exit status 80 (9.8407865s)

                                                
                                                
-- stdout --
	* [NoKubernetes-202000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-202000" primary control-plane node in "NoKubernetes-202000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-202000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-202000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-202000 -n NoKubernetes-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-202000 -n NoKubernetes-202000: exit status 7 (65.341584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.91s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-202000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-202000 --no-kubernetes --driver=qemu2 : exit status 80 (5.262292125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-202000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-202000
	* Restarting existing qemu2 VM for "NoKubernetes-202000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-202000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-202000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-202000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-202000 -n NoKubernetes-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-202000 -n NoKubernetes-202000: exit status 7 (71.645917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.33s)

                                                
                                    
TestNoKubernetes/serial/Start (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-202000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-202000 --no-kubernetes --driver=qemu2 : exit status 80 (5.244003833s)

                                                
                                                
-- stdout --
	* [NoKubernetes-202000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-202000
	* Restarting existing qemu2 VM for "NoKubernetes-202000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-202000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-202000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-202000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-202000 -n NoKubernetes-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-202000 -n NoKubernetes-202000: exit status 7 (53.894208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-202000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-202000 --driver=qemu2 : exit status 80 (5.291078208s)

                                                
                                                
-- stdout --
	* [NoKubernetes-202000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-202000
	* Restarting existing qemu2 VM for "NoKubernetes-202000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-202000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-202000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-202000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-202000 -n NoKubernetes-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-202000 -n NoKubernetes-202000: exit status 7 (58.842959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-202000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.35s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.794695792s)

                                                
                                                
-- stdout --
	* [auto-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-194000" primary control-plane node in "auto-194000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:46:01.381116   13411 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:46:01.381264   13411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:46:01.381268   13411 out.go:358] Setting ErrFile to fd 2...
	I1010 11:46:01.381270   13411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:46:01.381425   13411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:46:01.382555   13411 out.go:352] Setting JSON to false
	I1010 11:46:01.400503   13411 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8132,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:46:01.400570   13411 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:46:01.404098   13411 out.go:177] * [auto-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:46:01.412019   13411 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:46:01.412081   13411 notify.go:220] Checking for updates...
	I1010 11:46:01.419033   13411 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:46:01.422037   13411 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:46:01.425002   13411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:46:01.428081   13411 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:46:01.431019   13411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:46:01.434331   13411 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:46:01.434398   13411 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:46:01.434458   13411 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:46:01.438069   13411 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:46:01.444992   13411 start.go:297] selected driver: qemu2
	I1010 11:46:01.444998   13411 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:46:01.445003   13411 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:46:01.447459   13411 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:46:01.450169   13411 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:46:01.453097   13411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:46:01.453116   13411 cni.go:84] Creating CNI manager for ""
	I1010 11:46:01.453147   13411 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:46:01.453152   13411 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:46:01.453184   13411 start.go:340] cluster config:
	{Name:auto-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:46:01.457607   13411 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:46:01.466031   13411 out.go:177] * Starting "auto-194000" primary control-plane node in "auto-194000" cluster
	I1010 11:46:01.470015   13411 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:46:01.470032   13411 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:46:01.470042   13411 cache.go:56] Caching tarball of preloaded images
	I1010 11:46:01.470132   13411 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:46:01.470137   13411 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:46:01.470189   13411 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/auto-194000/config.json ...
	I1010 11:46:01.470208   13411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/auto-194000/config.json: {Name:mk9ecd4898efd7a0c3bfbfe6832994fe1d791011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:46:01.470528   13411 start.go:360] acquireMachinesLock for auto-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:46:01.470570   13411 start.go:364] duration metric: took 36.708µs to acquireMachinesLock for "auto-194000"
	I1010 11:46:01.470582   13411 start.go:93] Provisioning new machine with config: &{Name:auto-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:auto-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:46:01.470604   13411 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:46:01.474019   13411 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:46:01.488583   13411 start.go:159] libmachine.API.Create for "auto-194000" (driver="qemu2")
	I1010 11:46:01.488607   13411 client.go:168] LocalClient.Create starting
	I1010 11:46:01.488679   13411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:46:01.488716   13411 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:01.488725   13411 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:01.488762   13411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:46:01.488793   13411 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:01.488800   13411 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:01.489198   13411 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:46:01.647700   13411 main.go:141] libmachine: Creating SSH key...
	I1010 11:46:01.698734   13411 main.go:141] libmachine: Creating Disk image...
	I1010 11:46:01.698742   13411 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:46:01.698943   13411 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/disk.qcow2
	I1010 11:46:01.708833   13411 main.go:141] libmachine: STDOUT: 
	I1010 11:46:01.708852   13411 main.go:141] libmachine: STDERR: 
	I1010 11:46:01.708901   13411 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/disk.qcow2 +20000M
	I1010 11:46:01.717540   13411 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:46:01.717556   13411 main.go:141] libmachine: STDERR: 
	I1010 11:46:01.717568   13411 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/disk.qcow2
	I1010 11:46:01.717573   13411 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:46:01.717588   13411 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:46:01.717621   13411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:8e:9f:e1:dd:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/disk.qcow2
	I1010 11:46:01.719505   13411 main.go:141] libmachine: STDOUT: 
	I1010 11:46:01.719521   13411 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:46:01.719542   13411 client.go:171] duration metric: took 230.981458ms to LocalClient.Create
	I1010 11:46:03.721210   13411 start.go:128] duration metric: took 2.251071167s to createHost
	I1010 11:46:03.721225   13411 start.go:83] releasing machines lock for "auto-194000", held for 2.251125417s
	W1010 11:46:03.721248   13411 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:03.730797   13411 out.go:177] * Deleting "auto-194000" in qemu2 ...
	W1010 11:46:03.740878   13411 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:03.740889   13411 start.go:729] Will try again in 5 seconds ...
	I1010 11:46:08.742318   13411 start.go:360] acquireMachinesLock for auto-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:46:08.742886   13411 start.go:364] duration metric: took 465µs to acquireMachinesLock for "auto-194000"
	I1010 11:46:08.742996   13411 start.go:93] Provisioning new machine with config: &{Name:auto-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:auto-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:46:08.743171   13411 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:46:08.749885   13411 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:46:08.789004   13411 start.go:159] libmachine.API.Create for "auto-194000" (driver="qemu2")
	I1010 11:46:08.789054   13411 client.go:168] LocalClient.Create starting
	I1010 11:46:08.789221   13411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:46:08.789307   13411 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:08.789327   13411 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:08.789389   13411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:46:08.789438   13411 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:08.789455   13411 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:08.790101   13411 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:46:08.954885   13411 main.go:141] libmachine: Creating SSH key...
	I1010 11:46:09.078457   13411 main.go:141] libmachine: Creating Disk image...
	I1010 11:46:09.078466   13411 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:46:09.078668   13411 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/disk.qcow2
	I1010 11:46:09.089016   13411 main.go:141] libmachine: STDOUT: 
	I1010 11:46:09.089037   13411 main.go:141] libmachine: STDERR: 
	I1010 11:46:09.089101   13411 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/disk.qcow2 +20000M
	I1010 11:46:09.097650   13411 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:46:09.097666   13411 main.go:141] libmachine: STDERR: 
	I1010 11:46:09.097680   13411 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/disk.qcow2
	I1010 11:46:09.097686   13411 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:46:09.097695   13411 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:46:09.097733   13411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:22:e1:2b:3d:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/auto-194000/disk.qcow2
	I1010 11:46:09.099662   13411 main.go:141] libmachine: STDOUT: 
	I1010 11:46:09.099680   13411 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:46:09.099694   13411 client.go:171] duration metric: took 310.6745ms to LocalClient.Create
	I1010 11:46:11.101526   13411 start.go:128] duration metric: took 2.358659833s to createHost
	I1010 11:46:11.101570   13411 start.go:83] releasing machines lock for "auto-194000", held for 2.35897725s
	W1010 11:46:11.101755   13411 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:11.114822   13411 out.go:201] 
	W1010 11:46:11.117913   13411 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:46:11.117941   13411 out.go:270] * 
	* 
	W1010 11:46:11.119786   13411 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:46:11.132831   13411 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.80s)
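Every failure in this group traces to one step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the QEMU VM never launches and minikube exits with status 80 (GUEST_PROVISION). The probe below is a minimal, hypothetical Go sketch (not part of net_test.go) that reproduces the same connectivity check the logs show failing; the socket path is taken from the logs.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client connects to. On the
		// failing agent this returns "connect: connection refused" because
		// no socket_vmnet daemon is listening at that path.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails the same way, the fault lies with the socket_vmnet service on the build agent rather than with the network plugin under test, which is consistent with every CNI variant in this group failing identically.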

TestNetworkPlugins/group/kindnet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.849432292s)

-- stdout --
	* [kindnet-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-194000" primary control-plane node in "kindnet-194000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:46:13.565658   13520 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:46:13.565811   13520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:46:13.565815   13520 out.go:358] Setting ErrFile to fd 2...
	I1010 11:46:13.565817   13520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:46:13.565956   13520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:46:13.567169   13520 out.go:352] Setting JSON to false
	I1010 11:46:13.585409   13520 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8144,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:46:13.585514   13520 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:46:13.590049   13520 out.go:177] * [kindnet-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:46:13.597129   13520 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:46:13.597191   13520 notify.go:220] Checking for updates...
	I1010 11:46:13.604099   13520 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:46:13.607068   13520 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:46:13.610057   13520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:46:13.613040   13520 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:46:13.616119   13520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:46:13.619361   13520 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:46:13.619435   13520 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:46:13.619485   13520 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:46:13.623030   13520 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:46:13.629992   13520 start.go:297] selected driver: qemu2
	I1010 11:46:13.629999   13520 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:46:13.630006   13520 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:46:13.632365   13520 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:46:13.635057   13520 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:46:13.638122   13520 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:46:13.638141   13520 cni.go:84] Creating CNI manager for "kindnet"
	I1010 11:46:13.638144   13520 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 11:46:13.638178   13520 start.go:340] cluster config:
	{Name:kindnet-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:46:13.642610   13520 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:46:13.651045   13520 out.go:177] * Starting "kindnet-194000" primary control-plane node in "kindnet-194000" cluster
	I1010 11:46:13.654910   13520 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:46:13.654926   13520 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:46:13.654936   13520 cache.go:56] Caching tarball of preloaded images
	I1010 11:46:13.655019   13520 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:46:13.655024   13520 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:46:13.655091   13520 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/kindnet-194000/config.json ...
	I1010 11:46:13.655103   13520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/kindnet-194000/config.json: {Name:mk8543bb718a69e30f5807032ff55bfad0d0e1bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:46:13.655333   13520 start.go:360] acquireMachinesLock for kindnet-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:46:13.655377   13520 start.go:364] duration metric: took 38.625µs to acquireMachinesLock for "kindnet-194000"
	I1010 11:46:13.655392   13520 start.go:93] Provisioning new machine with config: &{Name:kindnet-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kindnet-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:46:13.655426   13520 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:46:13.662941   13520 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:46:13.679070   13520 start.go:159] libmachine.API.Create for "kindnet-194000" (driver="qemu2")
	I1010 11:46:13.679107   13520 client.go:168] LocalClient.Create starting
	I1010 11:46:13.679171   13520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:46:13.679210   13520 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:13.679223   13520 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:13.679261   13520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:46:13.679291   13520 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:13.679300   13520 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:13.679662   13520 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:46:13.836688   13520 main.go:141] libmachine: Creating SSH key...
	I1010 11:46:13.966147   13520 main.go:141] libmachine: Creating Disk image...
	I1010 11:46:13.966154   13520 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:46:13.966346   13520 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/disk.qcow2
	I1010 11:46:13.976386   13520 main.go:141] libmachine: STDOUT: 
	I1010 11:46:13.976403   13520 main.go:141] libmachine: STDERR: 
	I1010 11:46:13.976458   13520 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/disk.qcow2 +20000M
	I1010 11:46:13.985053   13520 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:46:13.985065   13520 main.go:141] libmachine: STDERR: 
	I1010 11:46:13.985076   13520 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/disk.qcow2
	I1010 11:46:13.985081   13520 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:46:13.985092   13520 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:46:13.985117   13520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:3c:c4:35:b3:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/disk.qcow2
	I1010 11:46:13.986906   13520 main.go:141] libmachine: STDOUT: 
	I1010 11:46:13.986920   13520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:46:13.986953   13520 client.go:171] duration metric: took 307.873ms to LocalClient.Create
	I1010 11:46:15.988865   13520 start.go:128] duration metric: took 2.333666458s to createHost
	I1010 11:46:15.988896   13520 start.go:83] releasing machines lock for "kindnet-194000", held for 2.333751792s
	W1010 11:46:15.988918   13520 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:16.002768   13520 out.go:177] * Deleting "kindnet-194000" in qemu2 ...
	W1010 11:46:16.017926   13520 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:16.017938   13520 start.go:729] Will try again in 5 seconds ...
	I1010 11:46:21.019640   13520 start.go:360] acquireMachinesLock for kindnet-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:46:21.019900   13520 start.go:364] duration metric: took 222.833µs to acquireMachinesLock for "kindnet-194000"
	I1010 11:46:21.019933   13520 start.go:93] Provisioning new machine with config: &{Name:kindnet-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kindnet-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:46:21.020043   13520 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:46:21.024378   13520 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:46:21.050677   13520 start.go:159] libmachine.API.Create for "kindnet-194000" (driver="qemu2")
	I1010 11:46:21.050709   13520 client.go:168] LocalClient.Create starting
	I1010 11:46:21.050801   13520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:46:21.050857   13520 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:21.050871   13520 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:21.050913   13520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:46:21.050952   13520 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:21.050962   13520 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:21.051452   13520 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:46:21.211309   13520 main.go:141] libmachine: Creating SSH key...
	I1010 11:46:21.315995   13520 main.go:141] libmachine: Creating Disk image...
	I1010 11:46:21.316005   13520 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:46:21.316225   13520 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/disk.qcow2
	I1010 11:46:21.327385   13520 main.go:141] libmachine: STDOUT: 
	I1010 11:46:21.327408   13520 main.go:141] libmachine: STDERR: 
	I1010 11:46:21.327470   13520 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/disk.qcow2 +20000M
	I1010 11:46:21.337605   13520 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:46:21.337629   13520 main.go:141] libmachine: STDERR: 
	I1010 11:46:21.337640   13520 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/disk.qcow2
	I1010 11:46:21.337643   13520 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:46:21.337652   13520 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:46:21.337678   13520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:73:92:f7:b3:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kindnet-194000/disk.qcow2
	I1010 11:46:21.339925   13520 main.go:141] libmachine: STDOUT: 
	I1010 11:46:21.339938   13520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:46:21.339955   13520 client.go:171] duration metric: took 289.258875ms to LocalClient.Create
	I1010 11:46:23.342039   13520 start.go:128] duration metric: took 2.322119583s to createHost
	I1010 11:46:23.342170   13520 start.go:83] releasing machines lock for "kindnet-194000", held for 2.322400958s
	W1010 11:46:23.342562   13520 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:23.351590   13520 out.go:201] 
	W1010 11:46:23.356588   13520 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:46:23.356614   13520 out.go:270] * 
	* 
	W1010 11:46:23.359409   13520 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:46:23.367547   13520 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
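The kindnet run fails with the same control flow as the auto run: the first createHost attempt fails, minikube deletes the partial profile, waits five seconds, retries once, and only then exits with GUEST_PROVISION. A rough Go sketch of that retry-once shape is below; createHost and deleteProfile are hypothetical stand-ins, not minikube's actual start.go functions.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the real provisioning step; here it always
	// fails the way the logs do.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// deleteProfile stands in for the cleanup step ("* Deleting ... in qemu2 ...").
	func deleteProfile() { fmt.Println("deleting partial profile") }

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			deleteProfile()
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80
			}
		}
	}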

TestNetworkPlugins/group/calico/Start (9.87s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.869063708s)

-- stdout --
	* [calico-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-194000" primary control-plane node in "calico-194000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:46:25.879766   13633 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:46:25.879922   13633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:46:25.879926   13633 out.go:358] Setting ErrFile to fd 2...
	I1010 11:46:25.879928   13633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:46:25.880082   13633 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:46:25.881303   13633 out.go:352] Setting JSON to false
	I1010 11:46:25.900146   13633 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8156,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:46:25.900216   13633 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:46:25.905567   13633 out.go:177] * [calico-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:46:25.913587   13633 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:46:25.913644   13633 notify.go:220] Checking for updates...
	I1010 11:46:25.919534   13633 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:46:25.922565   13633 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:46:25.923874   13633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:46:25.926544   13633 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:46:25.929549   13633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:46:25.932951   13633 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:46:25.933024   13633 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:46:25.933068   13633 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:46:25.937521   13633 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:46:25.944592   13633 start.go:297] selected driver: qemu2
	I1010 11:46:25.944600   13633 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:46:25.944607   13633 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:46:25.946992   13633 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:46:25.949501   13633 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:46:25.952608   13633 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:46:25.952624   13633 cni.go:84] Creating CNI manager for "calico"
	I1010 11:46:25.952627   13633 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1010 11:46:25.952654   13633 start.go:340] cluster config:
	{Name:calico-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:46:25.956859   13633 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:46:25.965568   13633 out.go:177] * Starting "calico-194000" primary control-plane node in "calico-194000" cluster
	I1010 11:46:25.969594   13633 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:46:25.969612   13633 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:46:25.969623   13633 cache.go:56] Caching tarball of preloaded images
	I1010 11:46:25.969707   13633 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:46:25.969712   13633 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:46:25.969763   13633 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/calico-194000/config.json ...
	I1010 11:46:25.969775   13633 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/calico-194000/config.json: {Name:mk8e6ea9f785e1ab553b3684773e22c405c480ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:46:25.970023   13633 start.go:360] acquireMachinesLock for calico-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:46:25.970068   13633 start.go:364] duration metric: took 39.5µs to acquireMachinesLock for "calico-194000"
	I1010 11:46:25.970080   13633 start.go:93] Provisioning new machine with config: &{Name:calico-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:46:25.970111   13633 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:46:25.974609   13633 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:46:25.990210   13633 start.go:159] libmachine.API.Create for "calico-194000" (driver="qemu2")
	I1010 11:46:25.990238   13633 client.go:168] LocalClient.Create starting
	I1010 11:46:25.990303   13633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:46:25.990340   13633 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:25.990353   13633 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:25.990393   13633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:46:25.990422   13633 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:25.990430   13633 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:25.990814   13633 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:46:26.146146   13633 main.go:141] libmachine: Creating SSH key...
	I1010 11:46:26.277937   13633 main.go:141] libmachine: Creating Disk image...
	I1010 11:46:26.277945   13633 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:46:26.278128   13633 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/disk.qcow2
	I1010 11:46:26.287956   13633 main.go:141] libmachine: STDOUT: 
	I1010 11:46:26.287976   13633 main.go:141] libmachine: STDERR: 
	I1010 11:46:26.288034   13633 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/disk.qcow2 +20000M
	I1010 11:46:26.296601   13633 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:46:26.296617   13633 main.go:141] libmachine: STDERR: 
	I1010 11:46:26.296634   13633 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/disk.qcow2
	I1010 11:46:26.296641   13633 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:46:26.296652   13633 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:46:26.296680   13633 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:04:d5:ba:eb:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/disk.qcow2
	I1010 11:46:26.298464   13633 main.go:141] libmachine: STDOUT: 
	I1010 11:46:26.298484   13633 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:46:26.298504   13633 client.go:171] duration metric: took 308.27675ms to LocalClient.Create
	I1010 11:46:28.300661   13633 start.go:128] duration metric: took 2.3306415s to createHost
	I1010 11:46:28.300790   13633 start.go:83] releasing machines lock for "calico-194000", held for 2.33080225s
	W1010 11:46:28.300855   13633 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:28.311918   13633 out.go:177] * Deleting "calico-194000" in qemu2 ...
	W1010 11:46:28.339660   13633 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:28.339696   13633 start.go:729] Will try again in 5 seconds ...
	I1010 11:46:33.341633   13633 start.go:360] acquireMachinesLock for calico-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:46:33.342206   13633 start.go:364] duration metric: took 486.292µs to acquireMachinesLock for "calico-194000"
	I1010 11:46:33.342307   13633 start.go:93] Provisioning new machine with config: &{Name:calico-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:46:33.342526   13633 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:46:33.347645   13633 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:46:33.391960   13633 start.go:159] libmachine.API.Create for "calico-194000" (driver="qemu2")
	I1010 11:46:33.392015   13633 client.go:168] LocalClient.Create starting
	I1010 11:46:33.392165   13633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:46:33.392255   13633 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:33.392272   13633 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:33.392330   13633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:46:33.392402   13633 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:33.392413   13633 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:33.392972   13633 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:46:33.558155   13633 main.go:141] libmachine: Creating SSH key...
	I1010 11:46:33.645654   13633 main.go:141] libmachine: Creating Disk image...
	I1010 11:46:33.645666   13633 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:46:33.645852   13633 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/disk.qcow2
	I1010 11:46:33.655666   13633 main.go:141] libmachine: STDOUT: 
	I1010 11:46:33.655688   13633 main.go:141] libmachine: STDERR: 
	I1010 11:46:33.655749   13633 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/disk.qcow2 +20000M
	I1010 11:46:33.664362   13633 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:46:33.664376   13633 main.go:141] libmachine: STDERR: 
	I1010 11:46:33.664394   13633 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/disk.qcow2
	I1010 11:46:33.664410   13633 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:46:33.664420   13633 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:46:33.664452   13633 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:cb:38:3a:3a:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/calico-194000/disk.qcow2
	I1010 11:46:33.666279   13633 main.go:141] libmachine: STDOUT: 
	I1010 11:46:33.666293   13633 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:46:33.666307   13633 client.go:171] duration metric: took 274.295792ms to LocalClient.Create
	I1010 11:46:35.668534   13633 start.go:128] duration metric: took 2.325975291s to createHost
	I1010 11:46:35.668629   13633 start.go:83] releasing machines lock for "calico-194000", held for 2.326489s
	W1010 11:46:35.668987   13633 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:35.683730   13633 out.go:201] 
	W1010 11:46:35.686881   13633 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:46:35.686930   13633 out.go:270] * 
	* 
	W1010 11:46:35.689000   13633 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:46:35.700703   13633 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.87s)
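
Note: this failure is not specific to the calico profile. The qemu2 driver launches QEMU through socket_vmnet_client, the connect to /var/run/socket_vmnet is refused because no socket_vmnet daemon is listening, createHost aborts on both attempts, and minikube exits with status 80 (GUEST_PROVISION). A minimal diagnostic sketch for the macOS runner, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2 driver docs (service name and paths may differ on other setups):

	# does the socket exist, and is the daemon running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# (re)start the daemon; it must run as root to use the vmnet framework
	sudo brew services restart socket_vmnet

Once the socket accepts connections, rerunning the same out/minikube-darwin-arm64 start invocation should get past host creation.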

TestNetworkPlugins/group/custom-flannel/Start (9.81s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.806041125s)

-- stdout --
	* [custom-flannel-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-194000" primary control-plane node in "custom-flannel-194000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:46:38.331070   13753 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:46:38.331258   13753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:46:38.331264   13753 out.go:358] Setting ErrFile to fd 2...
	I1010 11:46:38.331266   13753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:46:38.331425   13753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:46:38.332570   13753 out.go:352] Setting JSON to false
	I1010 11:46:38.350519   13753 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8169,"bootTime":1728577829,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:46:38.350623   13753 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:46:38.356337   13753 out.go:177] * [custom-flannel-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:46:38.364291   13753 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:46:38.364345   13753 notify.go:220] Checking for updates...
	I1010 11:46:38.371269   13753 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:46:38.374253   13753 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:46:38.377247   13753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:46:38.380304   13753 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:46:38.383328   13753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:46:38.386734   13753 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:46:38.386806   13753 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:46:38.386855   13753 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:46:38.391256   13753 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:46:38.402249   13753 start.go:297] selected driver: qemu2
	I1010 11:46:38.402254   13753 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:46:38.402260   13753 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:46:38.404704   13753 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:46:38.408312   13753 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:46:38.411393   13753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:46:38.411409   13753 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1010 11:46:38.411415   13753 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1010 11:46:38.411446   13753 start.go:340] cluster config:
	{Name:custom-flannel-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:46:38.416057   13753 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:46:38.424237   13753 out.go:177] * Starting "custom-flannel-194000" primary control-plane node in "custom-flannel-194000" cluster
	I1010 11:46:38.428249   13753 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:46:38.428265   13753 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:46:38.428271   13753 cache.go:56] Caching tarball of preloaded images
	I1010 11:46:38.428339   13753 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:46:38.428345   13753 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:46:38.428404   13753 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/custom-flannel-194000/config.json ...
	I1010 11:46:38.428415   13753 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/custom-flannel-194000/config.json: {Name:mk9dcc93a21c52aa125d1a10f905e30ca6c25bd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:46:38.428746   13753 start.go:360] acquireMachinesLock for custom-flannel-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:46:38.428793   13753 start.go:364] duration metric: took 39.791µs to acquireMachinesLock for "custom-flannel-194000"
	I1010 11:46:38.428813   13753 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:46:38.428842   13753 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:46:38.432300   13753 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:46:38.446960   13753 start.go:159] libmachine.API.Create for "custom-flannel-194000" (driver="qemu2")
	I1010 11:46:38.446988   13753 client.go:168] LocalClient.Create starting
	I1010 11:46:38.447067   13753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:46:38.447103   13753 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:38.447116   13753 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:38.447156   13753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:46:38.447184   13753 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:38.447196   13753 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:38.447566   13753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:46:38.603219   13753 main.go:141] libmachine: Creating SSH key...
	I1010 11:46:38.672080   13753 main.go:141] libmachine: Creating Disk image...
	I1010 11:46:38.672086   13753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:46:38.672285   13753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/disk.qcow2
	I1010 11:46:38.682183   13753 main.go:141] libmachine: STDOUT: 
	I1010 11:46:38.682205   13753 main.go:141] libmachine: STDERR: 
	I1010 11:46:38.682268   13753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/disk.qcow2 +20000M
	I1010 11:46:38.690769   13753 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:46:38.690784   13753 main.go:141] libmachine: STDERR: 
	I1010 11:46:38.690805   13753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/disk.qcow2
	I1010 11:46:38.690811   13753 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:46:38.690824   13753 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:46:38.690861   13753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:14:06:3b:97:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/disk.qcow2
	I1010 11:46:38.692635   13753 main.go:141] libmachine: STDOUT: 
	I1010 11:46:38.692651   13753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:46:38.692672   13753 client.go:171] duration metric: took 245.685584ms to LocalClient.Create
	I1010 11:46:40.694934   13753 start.go:128] duration metric: took 2.266127334s to createHost
	I1010 11:46:40.695027   13753 start.go:83] releasing machines lock for "custom-flannel-194000", held for 2.26629375s
	W1010 11:46:40.695825   13753 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:40.711017   13753 out.go:177] * Deleting "custom-flannel-194000" in qemu2 ...
	W1010 11:46:40.735415   13753 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:40.735448   13753 start.go:729] Will try again in 5 seconds ...
	I1010 11:46:45.737600   13753 start.go:360] acquireMachinesLock for custom-flannel-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:46:45.738305   13753 start.go:364] duration metric: took 597.584µs to acquireMachinesLock for "custom-flannel-194000"
	I1010 11:46:45.738381   13753 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:46:45.738663   13753 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:46:45.744413   13753 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:46:45.795012   13753 start.go:159] libmachine.API.Create for "custom-flannel-194000" (driver="qemu2")
	I1010 11:46:45.795094   13753 client.go:168] LocalClient.Create starting
	I1010 11:46:45.795257   13753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:46:45.795371   13753 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:45.795395   13753 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:45.795469   13753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:46:45.795529   13753 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:45.795543   13753 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:45.796287   13753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:46:45.966086   13753 main.go:141] libmachine: Creating SSH key...
	I1010 11:46:46.040685   13753 main.go:141] libmachine: Creating Disk image...
	I1010 11:46:46.040692   13753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:46:46.040896   13753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/disk.qcow2
	I1010 11:46:46.050719   13753 main.go:141] libmachine: STDOUT: 
	I1010 11:46:46.050737   13753 main.go:141] libmachine: STDERR: 
	I1010 11:46:46.050786   13753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/disk.qcow2 +20000M
	I1010 11:46:46.059443   13753 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:46:46.059456   13753 main.go:141] libmachine: STDERR: 
	I1010 11:46:46.059467   13753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/disk.qcow2
	I1010 11:46:46.059474   13753 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:46:46.059481   13753 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:46:46.059516   13753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:44:00:ee:f2:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/custom-flannel-194000/disk.qcow2
	I1010 11:46:46.061343   13753 main.go:141] libmachine: STDOUT: 
	I1010 11:46:46.061356   13753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:46:46.061369   13753 client.go:171] duration metric: took 266.273708ms to LocalClient.Create
	I1010 11:46:48.063547   13753 start.go:128] duration metric: took 2.324886542s to createHost
	I1010 11:46:48.063658   13753 start.go:83] releasing machines lock for "custom-flannel-194000", held for 2.325377125s
	W1010 11:46:48.064098   13753 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:48.073742   13753 out.go:201] 
	W1010 11:46:48.079869   13753 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:46:48.079910   13753 out.go:270] * 
	* 
	W1010 11:46:48.081988   13753 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:46:48.090760   13753 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.81s)
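
Note: the retry pattern is identical across these tests: LocalClient.Create fails within roughly 300ms, minikube deletes the half-created machine, waits 5 seconds, retries once, and then gives up. The failing step can be reproduced without minikube by pointing socket_vmnet_client at the same socket (a sketch; /usr/bin/true is just a harmless command for the client to wrap):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true; echo "exit: $?"

On this host that should print the same Failed to connect to "/var/run/socket_vmnet": Connection refused error and a non-zero exit status, which rules out the per-test cluster configs as the cause.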

TestNetworkPlugins/group/false/Start (9.9s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.89502525s)

-- stdout --
	* [false-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-194000" primary control-plane node in "false-194000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:46:50.643957   13870 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:46:50.644125   13870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:46:50.644129   13870 out.go:358] Setting ErrFile to fd 2...
	I1010 11:46:50.644132   13870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:46:50.644271   13870 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:46:50.645443   13870 out.go:352] Setting JSON to false
	I1010 11:46:50.663295   13870 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8181,"bootTime":1728577829,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:46:50.663374   13870 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:46:50.669341   13870 out.go:177] * [false-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:46:50.677325   13870 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:46:50.677393   13870 notify.go:220] Checking for updates...
	I1010 11:46:50.684278   13870 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:46:50.688079   13870 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:46:50.691343   13870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:46:50.694355   13870 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:46:50.697260   13870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:46:50.700757   13870 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:46:50.700844   13870 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:46:50.700884   13870 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:46:50.705247   13870 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:46:50.712254   13870 start.go:297] selected driver: qemu2
	I1010 11:46:50.712260   13870 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:46:50.712273   13870 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:46:50.714898   13870 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:46:50.718234   13870 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:46:50.721328   13870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:46:50.721346   13870 cni.go:84] Creating CNI manager for "false"
	I1010 11:46:50.721375   13870 start.go:340] cluster config:
	{Name:false-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:46:50.726100   13870 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:46:50.734257   13870 out.go:177] * Starting "false-194000" primary control-plane node in "false-194000" cluster
	I1010 11:46:50.738253   13870 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:46:50.738271   13870 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:46:50.738282   13870 cache.go:56] Caching tarball of preloaded images
	I1010 11:46:50.738397   13870 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:46:50.738403   13870 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:46:50.738463   13870 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/false-194000/config.json ...
	I1010 11:46:50.738474   13870 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/false-194000/config.json: {Name:mk4380b031690ea4ee3dd0719b1c8db356c986e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:46:50.738724   13870 start.go:360] acquireMachinesLock for false-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:46:50.738771   13870 start.go:364] duration metric: took 41.709µs to acquireMachinesLock for "false-194000"
	I1010 11:46:50.738785   13870 start.go:93] Provisioning new machine with config: &{Name:false-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:46:50.738816   13870 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:46:50.743295   13870 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:46:50.759080   13870 start.go:159] libmachine.API.Create for "false-194000" (driver="qemu2")
	I1010 11:46:50.759104   13870 client.go:168] LocalClient.Create starting
	I1010 11:46:50.759172   13870 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:46:50.759210   13870 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:50.759220   13870 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:50.759263   13870 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:46:50.759298   13870 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:50.759306   13870 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:50.759672   13870 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:46:50.918296   13870 main.go:141] libmachine: Creating SSH key...
	I1010 11:46:51.059240   13870 main.go:141] libmachine: Creating Disk image...
	I1010 11:46:51.059247   13870 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:46:51.059436   13870 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/disk.qcow2
	I1010 11:46:51.069400   13870 main.go:141] libmachine: STDOUT: 
	I1010 11:46:51.069437   13870 main.go:141] libmachine: STDERR: 
	I1010 11:46:51.069495   13870 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/disk.qcow2 +20000M
	I1010 11:46:51.078443   13870 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:46:51.078460   13870 main.go:141] libmachine: STDERR: 
	I1010 11:46:51.078477   13870 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/disk.qcow2
	I1010 11:46:51.078482   13870 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:46:51.078501   13870 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:46:51.078532   13870 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:09:a6:53:13:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/disk.qcow2
	I1010 11:46:51.080439   13870 main.go:141] libmachine: STDOUT: 
	I1010 11:46:51.080456   13870 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:46:51.080475   13870 client.go:171] duration metric: took 321.37225ms to LocalClient.Create
	I1010 11:46:53.082551   13870 start.go:128] duration metric: took 2.343734583s to createHost
	I1010 11:46:53.082575   13870 start.go:83] releasing machines lock for "false-194000", held for 2.34384725s
	W1010 11:46:53.082603   13870 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:53.092389   13870 out.go:177] * Deleting "false-194000" in qemu2 ...
	W1010 11:46:53.107945   13870 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:46:53.107959   13870 start.go:729] Will try again in 5 seconds ...
	I1010 11:46:58.110020   13870 start.go:360] acquireMachinesLock for false-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:46:58.110378   13870 start.go:364] duration metric: took 290.125µs to acquireMachinesLock for "false-194000"
	I1010 11:46:58.110444   13870 start.go:93] Provisioning new machine with config: &{Name:false-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:46:58.110569   13870 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:46:58.119170   13870 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:46:58.164553   13870 start.go:159] libmachine.API.Create for "false-194000" (driver="qemu2")
	I1010 11:46:58.164606   13870 client.go:168] LocalClient.Create starting
	I1010 11:46:58.164734   13870 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:46:58.164818   13870 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:58.164837   13870 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:58.164904   13870 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:46:58.164969   13870 main.go:141] libmachine: Decoding PEM data...
	I1010 11:46:58.164981   13870 main.go:141] libmachine: Parsing certificate...
	I1010 11:46:58.165517   13870 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:46:58.330619   13870 main.go:141] libmachine: Creating SSH key...
	I1010 11:46:58.449571   13870 main.go:141] libmachine: Creating Disk image...
	I1010 11:46:58.449578   13870 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:46:58.449775   13870 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/disk.qcow2
	I1010 11:46:58.460006   13870 main.go:141] libmachine: STDOUT: 
	I1010 11:46:58.460022   13870 main.go:141] libmachine: STDERR: 
	I1010 11:46:58.460093   13870 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/disk.qcow2 +20000M
	I1010 11:46:58.468845   13870 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:46:58.468860   13870 main.go:141] libmachine: STDERR: 
	I1010 11:46:58.468877   13870 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/disk.qcow2
	I1010 11:46:58.468883   13870 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:46:58.468895   13870 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:46:58.468930   13870 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:90:ad:65:04:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/false-194000/disk.qcow2
	I1010 11:46:58.470823   13870 main.go:141] libmachine: STDOUT: 
	I1010 11:46:58.470839   13870 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:46:58.470855   13870 client.go:171] duration metric: took 306.249167ms to LocalClient.Create
	I1010 11:47:00.472570   13870 start.go:128] duration metric: took 2.362033167s to createHost
	I1010 11:47:00.472608   13870 start.go:83] releasing machines lock for "false-194000", held for 2.362260041s
	W1010 11:47:00.472692   13870 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:00.483927   13870 out.go:201] 
	W1010 11:47:00.488016   13870 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:47:00.488021   13870 out.go:270] * 
	* 
	W1010 11:47:00.488518   13870 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:47:00.495952   13870 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.90s)
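[Editor's note] Every Start failure in this group has the same root cause: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), i.e. the socket_vmnet daemon was not running on the CI host. On a Homebrew setup the usual fix is to (re)start the daemon, typically via "sudo brew services start socket_vmnet". The following standalone Go probe (hypothetical; not part of minikube or this test suite) reproduces the precondition check and fails the same way on this host:

    // socketprobe.go (hypothetical diagnostic, not part of minikube or the
    // test suite): dial the unix socket that socket_vmnet_client needs and
    // report the same "connection refused" condition seen in the logs above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // On this CI host this prints:
            //   dial unix /var/run/socket_vmnet: connect: connection refused
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("%s is accepting connections\n", sock)
    }

Note that dialing the socket may require the same privileges socket_vmnet_client runs with; a permission error would still distinguish a running daemon from the refused connection seen here.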

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (10.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.084853917s)

                                                
                                                
-- stdout --
	* [enable-default-cni-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-194000" primary control-plane node in "enable-default-cni-194000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:47:02.837117   13982 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:47:02.837257   13982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:47:02.837260   13982 out.go:358] Setting ErrFile to fd 2...
	I1010 11:47:02.837263   13982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:47:02.837398   13982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:47:02.838563   13982 out.go:352] Setting JSON to false
	I1010 11:47:02.856444   13982 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8193,"bootTime":1728577829,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:47:02.856519   13982 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:47:02.862518   13982 out.go:177] * [enable-default-cni-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:47:02.870457   13982 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:47:02.870492   13982 notify.go:220] Checking for updates...
	I1010 11:47:02.877496   13982 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:47:02.883514   13982 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:47:02.887453   13982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:47:02.890490   13982 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:47:02.896484   13982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:47:02.900607   13982 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:47:02.900683   13982 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:47:02.900733   13982 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:47:02.905438   13982 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:47:02.912297   13982 start.go:297] selected driver: qemu2
	I1010 11:47:02.912303   13982 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:47:02.912308   13982 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:47:02.914771   13982 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:47:02.918388   13982 out.go:177] * Automatically selected the socket_vmnet network
	E1010 11:47:02.921517   13982 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1010 11:47:02.921529   13982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:47:02.921544   13982 cni.go:84] Creating CNI manager for "bridge"
	I1010 11:47:02.921548   13982 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:47:02.921571   13982 start.go:340] cluster config:
	{Name:enable-default-cni-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:47:02.926077   13982 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:47:02.930485   13982 out.go:177] * Starting "enable-default-cni-194000" primary control-plane node in "enable-default-cni-194000" cluster
	I1010 11:47:02.938481   13982 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:47:02.938497   13982 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:47:02.938503   13982 cache.go:56] Caching tarball of preloaded images
	I1010 11:47:02.938573   13982 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:47:02.938578   13982 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:47:02.938634   13982 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/enable-default-cni-194000/config.json ...
	I1010 11:47:02.938644   13982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/enable-default-cni-194000/config.json: {Name:mk43cf7d508ad4cdf39bbe277a9a9d404b7cf2de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:47:02.938876   13982 start.go:360] acquireMachinesLock for enable-default-cni-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:47:02.938922   13982 start.go:364] duration metric: took 36.75µs to acquireMachinesLock for "enable-default-cni-194000"
	I1010 11:47:02.938935   13982 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:47:02.938973   13982 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:47:02.946433   13982 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:47:02.961420   13982 start.go:159] libmachine.API.Create for "enable-default-cni-194000" (driver="qemu2")
	I1010 11:47:02.961449   13982 client.go:168] LocalClient.Create starting
	I1010 11:47:02.961519   13982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:47:02.961561   13982 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:02.961573   13982 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:02.961615   13982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:47:02.961644   13982 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:02.961651   13982 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:02.962013   13982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:47:03.163668   13982 main.go:141] libmachine: Creating SSH key...
	I1010 11:47:03.425160   13982 main.go:141] libmachine: Creating Disk image...
	I1010 11:47:03.425169   13982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:47:03.425409   13982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/disk.qcow2
	I1010 11:47:03.436165   13982 main.go:141] libmachine: STDOUT: 
	I1010 11:47:03.436186   13982 main.go:141] libmachine: STDERR: 
	I1010 11:47:03.436250   13982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/disk.qcow2 +20000M
	I1010 11:47:03.444850   13982 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:47:03.444866   13982 main.go:141] libmachine: STDERR: 
	I1010 11:47:03.444887   13982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/disk.qcow2
	I1010 11:47:03.444893   13982 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:47:03.444905   13982 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:47:03.444935   13982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:d8:7b:35:56:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/disk.qcow2
	I1010 11:47:03.446780   13982 main.go:141] libmachine: STDOUT: 
	I1010 11:47:03.446795   13982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:47:03.446813   13982 client.go:171] duration metric: took 485.365875ms to LocalClient.Create
	I1010 11:47:05.448882   13982 start.go:128] duration metric: took 2.509937666s to createHost
	I1010 11:47:05.448923   13982 start.go:83] releasing machines lock for "enable-default-cni-194000", held for 2.51003525s
	W1010 11:47:05.448952   13982 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:05.459367   13982 out.go:177] * Deleting "enable-default-cni-194000" in qemu2 ...
	W1010 11:47:05.482007   13982 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:05.482021   13982 start.go:729] Will try again in 5 seconds ...
	I1010 11:47:10.484255   13982 start.go:360] acquireMachinesLock for enable-default-cni-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:47:10.484915   13982 start.go:364] duration metric: took 511.208µs to acquireMachinesLock for "enable-default-cni-194000"
	I1010 11:47:10.484984   13982 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:47:10.485254   13982 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:47:10.494822   13982 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:47:10.539079   13982 start.go:159] libmachine.API.Create for "enable-default-cni-194000" (driver="qemu2")
	I1010 11:47:10.539140   13982 client.go:168] LocalClient.Create starting
	I1010 11:47:10.539294   13982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:47:10.539382   13982 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:10.539407   13982 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:10.539480   13982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:47:10.539541   13982 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:10.539556   13982 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:10.540210   13982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:47:10.705658   13982 main.go:141] libmachine: Creating SSH key...
	I1010 11:47:10.826890   13982 main.go:141] libmachine: Creating Disk image...
	I1010 11:47:10.826905   13982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:47:10.827104   13982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/disk.qcow2
	I1010 11:47:10.836911   13982 main.go:141] libmachine: STDOUT: 
	I1010 11:47:10.836935   13982 main.go:141] libmachine: STDERR: 
	I1010 11:47:10.837003   13982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/disk.qcow2 +20000M
	I1010 11:47:10.845406   13982 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:47:10.845421   13982 main.go:141] libmachine: STDERR: 
	I1010 11:47:10.845434   13982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/disk.qcow2
	I1010 11:47:10.845440   13982 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:47:10.845450   13982 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:47:10.845483   13982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:03:ee:63:86:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/enable-default-cni-194000/disk.qcow2
	I1010 11:47:10.847275   13982 main.go:141] libmachine: STDOUT: 
	I1010 11:47:10.847291   13982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:47:10.847305   13982 client.go:171] duration metric: took 308.164084ms to LocalClient.Create
	I1010 11:47:12.849467   13982 start.go:128] duration metric: took 2.364185208s to createHost
	I1010 11:47:12.849553   13982 start.go:83] releasing machines lock for "enable-default-cni-194000", held for 2.3646465s
	W1010 11:47:12.849928   13982 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:12.860464   13982 out.go:201] 
	W1010 11:47:12.863558   13982 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:47:12.863601   13982 out.go:270] * 
	* 
	W1010 11:47:12.866349   13982 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:47:12.875431   13982 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.09s)
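[Editor's note] Two details worth noting from this run. First, the E-line in the log ("Found deprecated --enable-default-cni flag, setting --cni=bridge") shows minikube translating the deprecated flag into --cni=bridge before provisioning even starts, so this failure is unrelated to CNI selection. Second, every failing profile follows the same recovery path: StartHost fails, the half-created machine is deleted, and host creation is retried once after a fixed five-second wait. A minimal Go sketch of that pattern as it appears in the logs (an illustration only, not minikube's actual start.go):

    // retrysketch.go (hypothetical): the delete-and-retry-once pattern
    // visible in the logs. createHost stands in for libmachine.API.Create.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func createHost() error {
        // Fails the way this CI host did while the socket_vmnet daemon was down.
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func startWithRetry() error {
        err := createHost()
        if err == nil {
            return nil
        }
        fmt.Printf("! StartHost failed, but will try again: %v\n", err)
        time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
        return createHost()
    }

    func main() {
        if err := startWithRetry(); err != nil {
            fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
        }
    }

Because the daemon never came back between attempts, the retry fails identically and the command exits with status 80.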

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.8923875s)

                                                
                                                
-- stdout --
	* [flannel-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-194000" primary control-plane node in "flannel-194000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:47:15.260041   14091 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:47:15.260208   14091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:47:15.260211   14091 out.go:358] Setting ErrFile to fd 2...
	I1010 11:47:15.260213   14091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:47:15.260336   14091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:47:15.261514   14091 out.go:352] Setting JSON to false
	I1010 11:47:15.279261   14091 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8206,"bootTime":1728577829,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:47:15.279329   14091 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:47:15.285462   14091 out.go:177] * [flannel-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:47:15.293396   14091 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:47:15.293523   14091 notify.go:220] Checking for updates...
	I1010 11:47:15.300433   14091 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:47:15.303401   14091 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:47:15.306418   14091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:47:15.309434   14091 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:47:15.312318   14091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:47:15.315700   14091 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:47:15.315781   14091 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:47:15.315840   14091 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:47:15.320321   14091 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:47:15.327404   14091 start.go:297] selected driver: qemu2
	I1010 11:47:15.327412   14091 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:47:15.327419   14091 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:47:15.329945   14091 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:47:15.333359   14091 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:47:15.336450   14091 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:47:15.336466   14091 cni.go:84] Creating CNI manager for "flannel"
	I1010 11:47:15.336469   14091 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1010 11:47:15.336509   14091 start.go:340] cluster config:
	{Name:flannel-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:47:15.341146   14091 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:47:15.349185   14091 out.go:177] * Starting "flannel-194000" primary control-plane node in "flannel-194000" cluster
	I1010 11:47:15.353399   14091 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:47:15.353419   14091 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:47:15.353431   14091 cache.go:56] Caching tarball of preloaded images
	I1010 11:47:15.353517   14091 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:47:15.353522   14091 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:47:15.353582   14091 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/flannel-194000/config.json ...
	I1010 11:47:15.353595   14091 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/flannel-194000/config.json: {Name:mk0506658b7d90876daa80bf559ec0d3ec5afe7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:47:15.353923   14091 start.go:360] acquireMachinesLock for flannel-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:47:15.353969   14091 start.go:364] duration metric: took 40.833µs to acquireMachinesLock for "flannel-194000"
	I1010 11:47:15.353982   14091 start.go:93] Provisioning new machine with config: &{Name:flannel-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:47:15.354015   14091 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:47:15.361384   14091 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:47:15.377779   14091 start.go:159] libmachine.API.Create for "flannel-194000" (driver="qemu2")
	I1010 11:47:15.377807   14091 client.go:168] LocalClient.Create starting
	I1010 11:47:15.377881   14091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:47:15.377920   14091 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:15.377933   14091 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:15.377973   14091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:47:15.378008   14091 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:15.378016   14091 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:15.378474   14091 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:47:15.537504   14091 main.go:141] libmachine: Creating SSH key...
	I1010 11:47:15.658842   14091 main.go:141] libmachine: Creating Disk image...
	I1010 11:47:15.658849   14091 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:47:15.659051   14091 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/disk.qcow2
	I1010 11:47:15.668965   14091 main.go:141] libmachine: STDOUT: 
	I1010 11:47:15.668982   14091 main.go:141] libmachine: STDERR: 
	I1010 11:47:15.669044   14091 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/disk.qcow2 +20000M
	I1010 11:47:15.677699   14091 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:47:15.677712   14091 main.go:141] libmachine: STDERR: 
	I1010 11:47:15.677725   14091 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/disk.qcow2
	I1010 11:47:15.677736   14091 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:47:15.677750   14091 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:47:15.677778   14091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:32:bc:1e:28:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/disk.qcow2
	I1010 11:47:15.679558   14091 main.go:141] libmachine: STDOUT: 
	I1010 11:47:15.679571   14091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:47:15.679587   14091 client.go:171] duration metric: took 301.780667ms to LocalClient.Create
	I1010 11:47:17.681743   14091 start.go:128] duration metric: took 2.327738042s to createHost
	I1010 11:47:17.681796   14091 start.go:83] releasing machines lock for "flannel-194000", held for 2.327851708s
	W1010 11:47:17.681864   14091 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:17.691492   14091 out.go:177] * Deleting "flannel-194000" in qemu2 ...
	W1010 11:47:17.710237   14091 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:17.710259   14091 start.go:729] Will try again in 5 seconds ...
	I1010 11:47:22.712447   14091 start.go:360] acquireMachinesLock for flannel-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:47:22.713078   14091 start.go:364] duration metric: took 525.625µs to acquireMachinesLock for "flannel-194000"
	I1010 11:47:22.713221   14091 start.go:93] Provisioning new machine with config: &{Name:flannel-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:47:22.713547   14091 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:47:22.720308   14091 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:47:22.769271   14091 start.go:159] libmachine.API.Create for "flannel-194000" (driver="qemu2")
	I1010 11:47:22.769337   14091 client.go:168] LocalClient.Create starting
	I1010 11:47:22.769480   14091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:47:22.769576   14091 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:22.769591   14091 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:22.769674   14091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:47:22.769732   14091 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:22.769751   14091 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:22.770441   14091 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:47:22.937943   14091 main.go:141] libmachine: Creating SSH key...
	I1010 11:47:23.053567   14091 main.go:141] libmachine: Creating Disk image...
	I1010 11:47:23.053577   14091 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:47:23.053770   14091 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/disk.qcow2
	I1010 11:47:23.063601   14091 main.go:141] libmachine: STDOUT: 
	I1010 11:47:23.063627   14091 main.go:141] libmachine: STDERR: 
	I1010 11:47:23.063681   14091 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/disk.qcow2 +20000M
	I1010 11:47:23.072429   14091 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:47:23.072443   14091 main.go:141] libmachine: STDERR: 
	I1010 11:47:23.072455   14091 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/disk.qcow2
	I1010 11:47:23.072461   14091 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:47:23.072473   14091 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:47:23.072501   14091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:50:c1:f8:a4:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/flannel-194000/disk.qcow2
	I1010 11:47:23.074285   14091 main.go:141] libmachine: STDOUT: 
	I1010 11:47:23.074299   14091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:47:23.074319   14091 client.go:171] duration metric: took 304.97975ms to LocalClient.Create
	I1010 11:47:25.076482   14091 start.go:128] duration metric: took 2.362929875s to createHost
	I1010 11:47:25.076555   14091 start.go:83] releasing machines lock for "flannel-194000", held for 2.363487s
	W1010 11:47:25.076973   14091 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:25.088516   14091 out.go:201] 
	W1010 11:47:25.092717   14091 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:47:25.092738   14091 out.go:270] * 
	* 
	W1010 11:47:25.094645   14091 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:47:25.105649   14091 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.89s)
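Note: this failure, like every other qemu2 failure in this report, reduces to the single STDERR line 'Failed to connect to "/var/run/socket_vmnet": Connection refused' — VM creation proceeds normally until socket_vmnet_client tries to reach the socket_vmnet daemon at /var/run/socket_vmnet and finds no listener. A minimal Go sketch of that reachability check (standard library only; a diagnostic illustration, not code from the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // SocketVMnetPath from the cluster config logged above.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // "connection refused" matches the STDERR above: the socket file
            // may exist, but no socket_vmnet daemon is accepting on it.
            fmt.Printf("no listener on %s: %v\n", sock, err)
            return
        }
        conn.Close()
        fmt.Printf("socket_vmnet is accepting connections on %s\n", sock)
    }

If this probe fails on the build agent, every qemu2 test that selects the socket_vmnet network will fail the same way, which is consistent with the pattern across this report.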

TestNetworkPlugins/group/bridge/Start (9.9s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.903920333s)

-- stdout --
	* [bridge-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-194000" primary control-plane node in "bridge-194000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:47:27.705927   14209 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:47:27.706091   14209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:47:27.706094   14209 out.go:358] Setting ErrFile to fd 2...
	I1010 11:47:27.706096   14209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:47:27.706231   14209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:47:27.707377   14209 out.go:352] Setting JSON to false
	I1010 11:47:27.725509   14209 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8218,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:47:27.725581   14209 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:47:27.730197   14209 out.go:177] * [bridge-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:47:27.737978   14209 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:47:27.738052   14209 notify.go:220] Checking for updates...
	I1010 11:47:27.745158   14209 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:47:27.746626   14209 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:47:27.750171   14209 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:47:27.753170   14209 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:47:27.756251   14209 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:47:27.759548   14209 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:47:27.759620   14209 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:47:27.759664   14209 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:47:27.764162   14209 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:47:27.771181   14209 start.go:297] selected driver: qemu2
	I1010 11:47:27.771188   14209 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:47:27.771196   14209 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:47:27.773683   14209 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:47:27.776144   14209 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:47:27.779268   14209 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:47:27.779287   14209 cni.go:84] Creating CNI manager for "bridge"
	I1010 11:47:27.779291   14209 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:47:27.779325   14209 start.go:340] cluster config:
	{Name:bridge-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:47:27.783928   14209 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:47:27.792149   14209 out.go:177] * Starting "bridge-194000" primary control-plane node in "bridge-194000" cluster
	I1010 11:47:27.796077   14209 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:47:27.796101   14209 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:47:27.796109   14209 cache.go:56] Caching tarball of preloaded images
	I1010 11:47:27.796199   14209 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:47:27.796204   14209 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:47:27.796263   14209 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/bridge-194000/config.json ...
	I1010 11:47:27.796274   14209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/bridge-194000/config.json: {Name:mk2956de70e8d363428af0fed225961632162c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:47:27.796544   14209 start.go:360] acquireMachinesLock for bridge-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:47:27.796589   14209 start.go:364] duration metric: took 40.208µs to acquireMachinesLock for "bridge-194000"
	I1010 11:47:27.796602   14209 start.go:93] Provisioning new machine with config: &{Name:bridge-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:bridge-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:47:27.796637   14209 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:47:27.799154   14209 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:47:27.814828   14209 start.go:159] libmachine.API.Create for "bridge-194000" (driver="qemu2")
	I1010 11:47:27.814863   14209 client.go:168] LocalClient.Create starting
	I1010 11:47:27.814942   14209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:47:27.814982   14209 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:27.814997   14209 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:27.815040   14209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:47:27.815069   14209 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:27.815078   14209 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:27.815451   14209 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:47:27.970623   14209 main.go:141] libmachine: Creating SSH key...
	I1010 11:47:28.034612   14209 main.go:141] libmachine: Creating Disk image...
	I1010 11:47:28.034620   14209 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:47:28.034809   14209 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/disk.qcow2
	I1010 11:47:28.045168   14209 main.go:141] libmachine: STDOUT: 
	I1010 11:47:28.045193   14209 main.go:141] libmachine: STDERR: 
	I1010 11:47:28.045266   14209 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/disk.qcow2 +20000M
	I1010 11:47:28.053853   14209 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:47:28.053869   14209 main.go:141] libmachine: STDERR: 
	I1010 11:47:28.053893   14209 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/disk.qcow2
	I1010 11:47:28.053899   14209 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:47:28.053912   14209 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:47:28.053944   14209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:ad:38:98:5e:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/disk.qcow2
	I1010 11:47:28.055778   14209 main.go:141] libmachine: STDOUT: 
	I1010 11:47:28.055791   14209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:47:28.055808   14209 client.go:171] duration metric: took 240.941958ms to LocalClient.Create
	I1010 11:47:30.058020   14209 start.go:128] duration metric: took 2.261379584s to createHost
	I1010 11:47:30.058088   14209 start.go:83] releasing machines lock for "bridge-194000", held for 2.261519792s
	W1010 11:47:30.058133   14209 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:30.071071   14209 out.go:177] * Deleting "bridge-194000" in qemu2 ...
	W1010 11:47:30.092936   14209 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:30.092962   14209 start.go:729] Will try again in 5 seconds ...
	I1010 11:47:35.095181   14209 start.go:360] acquireMachinesLock for bridge-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:47:35.095761   14209 start.go:364] duration metric: took 470.166µs to acquireMachinesLock for "bridge-194000"
	I1010 11:47:35.095820   14209 start.go:93] Provisioning new machine with config: &{Name:bridge-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:bridge-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:47:35.096137   14209 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:47:35.105873   14209 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:47:35.153572   14209 start.go:159] libmachine.API.Create for "bridge-194000" (driver="qemu2")
	I1010 11:47:35.153632   14209 client.go:168] LocalClient.Create starting
	I1010 11:47:35.153813   14209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:47:35.153893   14209 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:35.153909   14209 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:35.153976   14209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:47:35.154033   14209 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:35.154048   14209 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:35.154671   14209 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:47:35.323624   14209 main.go:141] libmachine: Creating SSH key...
	I1010 11:47:35.514656   14209 main.go:141] libmachine: Creating Disk image...
	I1010 11:47:35.514666   14209 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:47:35.514915   14209 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/disk.qcow2
	I1010 11:47:35.525598   14209 main.go:141] libmachine: STDOUT: 
	I1010 11:47:35.525624   14209 main.go:141] libmachine: STDERR: 
	I1010 11:47:35.525686   14209 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/disk.qcow2 +20000M
	I1010 11:47:35.534426   14209 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:47:35.534447   14209 main.go:141] libmachine: STDERR: 
	I1010 11:47:35.534459   14209 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/disk.qcow2
	I1010 11:47:35.534465   14209 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:47:35.534473   14209 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:47:35.534498   14209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:33:bb:09:c8:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/bridge-194000/disk.qcow2
	I1010 11:47:35.536457   14209 main.go:141] libmachine: STDOUT: 
	I1010 11:47:35.536470   14209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:47:35.536490   14209 client.go:171] duration metric: took 382.857042ms to LocalClient.Create
	I1010 11:47:37.538531   14209 start.go:128] duration metric: took 2.442401417s to createHost
	I1010 11:47:37.538548   14209 start.go:83] releasing machines lock for "bridge-194000", held for 2.4427995s
	W1010 11:47:37.538652   14209 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:37.552861   14209 out.go:201] 
	W1010 11:47:37.555983   14209 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:47:37.555987   14209 out.go:270] * 
	* 
	W1010 11:47:37.556428   14209 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:47:37.569744   14209 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.90s)
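The "Creating 20000 MB hard disk image..." phase that precedes each failed VM start is two qemu-img invocations, both visible in the log: a raw-to-qcow2 convert followed by a +20000M resize. A short Go sketch of the same two steps via os/exec (placeholder paths, not the Jenkins paths above; illustration only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        raw := "/tmp/demo/disk.qcow2.raw" // hypothetical paths for illustration
        img := "/tmp/demo/disk.qcow2"
        steps := [][]string{
            {"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img},
            {"qemu-img", "resize", img, "+20000M"},
        }
        for _, args := range steps {
            // libmachine logs the command line and STDOUT/STDERR the same way.
            out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
            fmt.Printf("executing: %v\n%s", args, out)
            if err != nil {
                fmt.Println("error:", err)
                return
            }
        }
    }

Both steps succeed in every run above ("Image resized."), so the disk-image phase is not the failure point; the error only appears at the subsequent socket_vmnet_client launch.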

TestNetworkPlugins/group/kubenet/Start (10.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-194000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.130599125s)

-- stdout --
	* [kubenet-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-194000" primary control-plane node in "kubenet-194000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:47:39.910597   14318 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:47:39.910766   14318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:47:39.910770   14318 out.go:358] Setting ErrFile to fd 2...
	I1010 11:47:39.910772   14318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:47:39.910887   14318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:47:39.912030   14318 out.go:352] Setting JSON to false
	I1010 11:47:39.930859   14318 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8230,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:47:39.930941   14318 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:47:39.935681   14318 out.go:177] * [kubenet-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:47:39.943538   14318 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:47:39.943583   14318 notify.go:220] Checking for updates...
	I1010 11:47:39.950610   14318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:47:39.953498   14318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:47:39.956624   14318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:47:39.959664   14318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:47:39.962578   14318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:47:39.965954   14318 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:47:39.966033   14318 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:47:39.966099   14318 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:47:39.973567   14318 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:47:39.984480   14318 start.go:297] selected driver: qemu2
	I1010 11:47:39.984487   14318 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:47:39.984494   14318 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:47:39.987137   14318 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:47:40.007597   14318 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:47:40.015739   14318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:47:40.015769   14318 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1010 11:47:40.015815   14318 start.go:340] cluster config:
	{Name:kubenet-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:47:40.022242   14318 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:47:40.026529   14318 out.go:177] * Starting "kubenet-194000" primary control-plane node in "kubenet-194000" cluster
	I1010 11:47:40.034460   14318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:47:40.034514   14318 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:47:40.034521   14318 cache.go:56] Caching tarball of preloaded images
	I1010 11:47:40.034637   14318 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:47:40.034643   14318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:47:40.034710   14318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/kubenet-194000/config.json ...
	I1010 11:47:40.034721   14318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/kubenet-194000/config.json: {Name:mk4f7922f045526fa9dbfb5a05e4edab5b42dfd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:47:40.035005   14318 start.go:360] acquireMachinesLock for kubenet-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:47:40.035047   14318 start.go:364] duration metric: took 37.459µs to acquireMachinesLock for "kubenet-194000"
	I1010 11:47:40.035060   14318 start.go:93] Provisioning new machine with config: &{Name:kubenet-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kubenet-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:47:40.035100   14318 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:47:40.039561   14318 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:47:40.057493   14318 start.go:159] libmachine.API.Create for "kubenet-194000" (driver="qemu2")
	I1010 11:47:40.057531   14318 client.go:168] LocalClient.Create starting
	I1010 11:47:40.057631   14318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:47:40.057671   14318 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:40.057685   14318 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:40.057727   14318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:47:40.057759   14318 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:40.057767   14318 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:40.058151   14318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:47:40.352876   14318 main.go:141] libmachine: Creating SSH key...
	I1010 11:47:40.556934   14318 main.go:141] libmachine: Creating Disk image...
	I1010 11:47:40.556947   14318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:47:40.559890   14318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/disk.qcow2
	I1010 11:47:40.575461   14318 main.go:141] libmachine: STDOUT: 
	I1010 11:47:40.575500   14318 main.go:141] libmachine: STDERR: 
	I1010 11:47:40.575583   14318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/disk.qcow2 +20000M
	I1010 11:47:40.585830   14318 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:47:40.585851   14318 main.go:141] libmachine: STDERR: 
	I1010 11:47:40.585875   14318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/disk.qcow2
	I1010 11:47:40.585884   14318 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:47:40.585898   14318 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:47:40.585929   14318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:35:1e:35:da:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/disk.qcow2
	I1010 11:47:40.588222   14318 main.go:141] libmachine: STDOUT: 
	I1010 11:47:40.588237   14318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:47:40.588264   14318 client.go:171] duration metric: took 530.734666ms to LocalClient.Create
	I1010 11:47:42.590419   14318 start.go:128] duration metric: took 2.555326334s to createHost
	I1010 11:47:42.590508   14318 start.go:83] releasing machines lock for "kubenet-194000", held for 2.555486334s
	W1010 11:47:42.590580   14318 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:42.597438   14318 out.go:177] * Deleting "kubenet-194000" in qemu2 ...
	W1010 11:47:42.622191   14318 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:42.622219   14318 start.go:729] Will try again in 5 seconds ...
	I1010 11:47:47.624317   14318 start.go:360] acquireMachinesLock for kubenet-194000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:47:47.624574   14318 start.go:364] duration metric: took 216.708µs to acquireMachinesLock for "kubenet-194000"
	I1010 11:47:47.624604   14318 start.go:93] Provisioning new machine with config: &{Name:kubenet-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kubenet-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:47:47.624696   14318 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:47:47.634023   14318 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1010 11:47:47.655321   14318 start.go:159] libmachine.API.Create for "kubenet-194000" (driver="qemu2")
	I1010 11:47:47.655358   14318 client.go:168] LocalClient.Create starting
	I1010 11:47:47.655442   14318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:47:47.655490   14318 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:47.655498   14318 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:47.655540   14318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:47:47.655574   14318 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:47.655594   14318 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:47.656020   14318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:47:47.811304   14318 main.go:141] libmachine: Creating SSH key...
	I1010 11:47:47.944442   14318 main.go:141] libmachine: Creating Disk image...
	I1010 11:47:47.944455   14318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:47:47.944640   14318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/disk.qcow2
	I1010 11:47:47.954504   14318 main.go:141] libmachine: STDOUT: 
	I1010 11:47:47.954526   14318 main.go:141] libmachine: STDERR: 
	I1010 11:47:47.954591   14318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/disk.qcow2 +20000M
	I1010 11:47:47.963067   14318 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:47:47.963080   14318 main.go:141] libmachine: STDERR: 
	I1010 11:47:47.963092   14318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/disk.qcow2
	I1010 11:47:47.963100   14318 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:47:47.963108   14318 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:47:47.963150   14318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:b5:06:3a:bd:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/kubenet-194000/disk.qcow2
	I1010 11:47:47.964962   14318 main.go:141] libmachine: STDOUT: 
	I1010 11:47:47.964976   14318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:47:47.964991   14318 client.go:171] duration metric: took 309.63175ms to LocalClient.Create
	I1010 11:47:49.967255   14318 start.go:128] duration metric: took 2.342524042s to createHost
	I1010 11:47:49.967362   14318 start.go:83] releasing machines lock for "kubenet-194000", held for 2.342804917s
	W1010 11:47:49.967699   14318 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:49.977164   14318 out.go:201] 
	W1010 11:47:49.983345   14318 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:47:49.983391   14318 out.go:270] * 
	* 
	W1010 11:47:49.985943   14318 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:47:49.994195   14318 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.13s)
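Each test in this group shows the same control flow around the failure: createHost fails, minikube deletes the half-created profile, waits five seconds ("Will try again in 5 seconds ..."), retries once, and exits with GUEST_PROVISION when the retry also fails. A compact sketch of that retry shape (createHost here is a stand-in that always fails, not minikube's real implementation):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for the provisioning call that fails above.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := createHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
            if err = createHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
                return
            }
        }
        fmt.Println("host created")
    }

This also accounts for the ~10 s duration of every failed Start in this report: two create attempts of roughly 2.5 s each plus the 5 s back-off.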

TestStartStop/group/old-k8s-version/serial/FirstStart (10.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-829000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-829000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.935309041s)

-- stdout --
	* [old-k8s-version-829000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-829000" primary control-plane node in "old-k8s-version-829000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-829000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:47:52.378397   14434 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:47:52.378550   14434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:47:52.378554   14434 out.go:358] Setting ErrFile to fd 2...
	I1010 11:47:52.378556   14434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:47:52.378695   14434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:47:52.379885   14434 out.go:352] Setting JSON to false
	I1010 11:47:52.397907   14434 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8243,"bootTime":1728577829,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:47:52.397982   14434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:47:52.403725   14434 out.go:177] * [old-k8s-version-829000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:47:52.411787   14434 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:47:52.411831   14434 notify.go:220] Checking for updates...
	I1010 11:47:52.418753   14434 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:47:52.421783   14434 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:47:52.424693   14434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:47:52.427794   14434 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:47:52.430852   14434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:47:52.434065   14434 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:47:52.434143   14434 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:47:52.434184   14434 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:47:52.438762   14434 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:47:52.445664   14434 start.go:297] selected driver: qemu2
	I1010 11:47:52.445671   14434 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:47:52.445678   14434 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:47:52.448105   14434 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:47:52.450761   14434 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:47:52.453875   14434 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:47:52.453900   14434 cni.go:84] Creating CNI manager for ""
	I1010 11:47:52.453927   14434 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1010 11:47:52.453947   14434 start.go:340] cluster config:
	{Name:old-k8s-version-829000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:47:52.458210   14434 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:47:52.466740   14434 out.go:177] * Starting "old-k8s-version-829000" primary control-plane node in "old-k8s-version-829000" cluster
	I1010 11:47:52.470532   14434 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1010 11:47:52.470546   14434 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1010 11:47:52.470552   14434 cache.go:56] Caching tarball of preloaded images
	I1010 11:47:52.470612   14434 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:47:52.470617   14434 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1010 11:47:52.470661   14434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/old-k8s-version-829000/config.json ...
	I1010 11:47:52.470670   14434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/old-k8s-version-829000/config.json: {Name:mke6b992e89de3e5e42ed99429e1119332446ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:47:52.470900   14434 start.go:360] acquireMachinesLock for old-k8s-version-829000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:47:52.470941   14434 start.go:364] duration metric: took 36.042µs to acquireMachinesLock for "old-k8s-version-829000"
	I1010 11:47:52.470953   14434 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:47:52.470987   14434 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:47:52.478603   14434 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:47:52.494114   14434 start.go:159] libmachine.API.Create for "old-k8s-version-829000" (driver="qemu2")
	I1010 11:47:52.494146   14434 client.go:168] LocalClient.Create starting
	I1010 11:47:52.494233   14434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:47:52.494272   14434 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:52.494285   14434 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:52.494327   14434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:47:52.494362   14434 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:52.494369   14434 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:52.494754   14434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:47:52.649630   14434 main.go:141] libmachine: Creating SSH key...
	I1010 11:47:52.772738   14434 main.go:141] libmachine: Creating Disk image...
	I1010 11:47:52.772746   14434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:47:52.772943   14434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2
	I1010 11:47:52.782987   14434 main.go:141] libmachine: STDOUT: 
	I1010 11:47:52.783011   14434 main.go:141] libmachine: STDERR: 
	I1010 11:47:52.783077   14434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2 +20000M
	I1010 11:47:52.792089   14434 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:47:52.792106   14434 main.go:141] libmachine: STDERR: 
	I1010 11:47:52.792129   14434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2
	I1010 11:47:52.792136   14434 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:47:52.792149   14434 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:47:52.792186   14434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:8e:8a:3c:df:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2
	I1010 11:47:52.794101   14434 main.go:141] libmachine: STDOUT: 
	I1010 11:47:52.794113   14434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:47:52.794136   14434 client.go:171] duration metric: took 299.986417ms to LocalClient.Create
	I1010 11:47:54.796364   14434 start.go:128] duration metric: took 2.325362833s to createHost
	I1010 11:47:54.796454   14434 start.go:83] releasing machines lock for "old-k8s-version-829000", held for 2.325534625s
	W1010 11:47:54.796499   14434 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:54.810401   14434 out.go:177] * Deleting "old-k8s-version-829000" in qemu2 ...
	W1010 11:47:54.832967   14434 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:47:54.832993   14434 start.go:729] Will try again in 5 seconds ...
	I1010 11:47:59.835175   14434 start.go:360] acquireMachinesLock for old-k8s-version-829000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:47:59.835739   14434 start.go:364] duration metric: took 477.458µs to acquireMachinesLock for "old-k8s-version-829000"
	I1010 11:47:59.835888   14434 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:47:59.836159   14434 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:47:59.845693   14434 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:47:59.890707   14434 start.go:159] libmachine.API.Create for "old-k8s-version-829000" (driver="qemu2")
	I1010 11:47:59.890762   14434 client.go:168] LocalClient.Create starting
	I1010 11:47:59.890915   14434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:47:59.891003   14434 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:59.891021   14434 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:59.891081   14434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:47:59.891137   14434 main.go:141] libmachine: Decoding PEM data...
	I1010 11:47:59.891148   14434 main.go:141] libmachine: Parsing certificate...
	I1010 11:47:59.891736   14434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:48:00.060579   14434 main.go:141] libmachine: Creating SSH key...
	I1010 11:48:00.213032   14434 main.go:141] libmachine: Creating Disk image...
	I1010 11:48:00.213041   14434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:48:00.213237   14434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2
	I1010 11:48:00.223191   14434 main.go:141] libmachine: STDOUT: 
	I1010 11:48:00.223208   14434 main.go:141] libmachine: STDERR: 
	I1010 11:48:00.223264   14434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2 +20000M
	I1010 11:48:00.231970   14434 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:48:00.231985   14434 main.go:141] libmachine: STDERR: 
	I1010 11:48:00.231998   14434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2
	I1010 11:48:00.232004   14434 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:48:00.232013   14434 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:00.232048   14434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:b5:03:de:8d:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2
	I1010 11:48:00.233947   14434 main.go:141] libmachine: STDOUT: 
	I1010 11:48:00.233961   14434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:00.233973   14434 client.go:171] duration metric: took 343.209958ms to LocalClient.Create
	I1010 11:48:02.236157   14434 start.go:128] duration metric: took 2.399987292s to createHost
	I1010 11:48:02.236260   14434 start.go:83] releasing machines lock for "old-k8s-version-829000", held for 2.400529458s
	W1010 11:48:02.236686   14434 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-829000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-829000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:02.249278   14434 out.go:201] 
	W1010 11:48:02.252530   14434 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:02.252558   14434 out.go:270] * 
	* 
	W1010 11:48:02.255094   14434 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:48:02.266202   14434 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-829000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000: exit status 7 (70.636917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.01s)
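
Both create attempts above stall at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and minikube gives up after one delete-and-retry cycle. A minimal triage sketch, assuming the Homebrew layout implied by the /opt/socket_vmnet paths in the log (the launchd service name is an assumption):

	# Is the socket present, and is the daemon behind it loaded?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# If the daemon is down, restarting it is the usual fix for "Connection refused"
	# (assumes socket_vmnet was installed as a Homebrew service):
	sudo brew services restart socket_vmnet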

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-829000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-829000 create -f testdata/busybox.yaml: exit status 1 (30.422625ms)

** stderr ** 
	error: context "old-k8s-version-829000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-829000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000: exit status 7 (34.319042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-829000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000: exit status 7 (33.136583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
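
This failure is purely downstream of FirstStart: the VM was never created, so minikube never wrote a kubeconfig entry for the profile, and every kubectl --context call fails the same way. The missing-context condition can be checked directly (a sketch; the profile name is taken from the log above):

	kubectl config get-contexts old-k8s-version-829000 \
		|| echo "context missing: FirstStart never wrote it to the kubeconfig"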

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-829000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-829000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-829000 describe deploy/metrics-server -n kube-system: exit status 1 (27.851666ms)

** stderr ** 
	error: context "old-k8s-version-829000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-829000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000: exit status 7 (34.337917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
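
The assertion at start_stop_delete_test.go:221 expects the metrics-server deployment image to be the registry override joined to the image override, i.e. fake.domain/registry.k8s.io/echoserver:1.4. A standalone equivalent of that check, sketched with jsonpath instead of describe-and-grep (it can only pass once the cluster actually exists):

	kubectl --context old-k8s-version-829000 -n kube-system get deploy metrics-server \
		-o jsonpath='{.spec.template.spec.containers[0].image}' \
		| grep -F 'fake.domain/registry.k8s.io/echoserver:1.4'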

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-829000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-829000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.1988435s)

-- stdout --
	* [old-k8s-version-829000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-829000" primary control-plane node in "old-k8s-version-829000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-829000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-829000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:48:06.361815   14486 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:06.361968   14486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:06.361972   14486 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:06.361974   14486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:06.362103   14486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:06.363207   14486 out.go:352] Setting JSON to false
	I1010 11:48:06.381154   14486 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8257,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:48:06.381221   14486 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:48:06.386641   14486 out.go:177] * [old-k8s-version-829000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:48:06.393419   14486 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:48:06.393449   14486 notify.go:220] Checking for updates...
	I1010 11:48:06.400592   14486 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:48:06.401932   14486 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:48:06.404537   14486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:48:06.407650   14486 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:48:06.410620   14486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:48:06.413813   14486 config.go:182] Loaded profile config "old-k8s-version-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1010 11:48:06.417586   14486 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1010 11:48:06.420628   14486 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:48:06.424621   14486 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:48:06.431603   14486 start.go:297] selected driver: qemu2
	I1010 11:48:06.431608   14486 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:06.431661   14486 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:48:06.434134   14486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:48:06.434193   14486 cni.go:84] Creating CNI manager for ""
	I1010 11:48:06.434211   14486 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1010 11:48:06.434232   14486 start.go:340] cluster config:
	{Name:old-k8s-version-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:06.438436   14486 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:06.446632   14486 out.go:177] * Starting "old-k8s-version-829000" primary control-plane node in "old-k8s-version-829000" cluster
	I1010 11:48:06.451843   14486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1010 11:48:06.451857   14486 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1010 11:48:06.451866   14486 cache.go:56] Caching tarball of preloaded images
	I1010 11:48:06.451938   14486 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:48:06.451943   14486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1010 11:48:06.451995   14486 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/old-k8s-version-829000/config.json ...
	I1010 11:48:06.452425   14486 start.go:360] acquireMachinesLock for old-k8s-version-829000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:06.452452   14486 start.go:364] duration metric: took 21.667µs to acquireMachinesLock for "old-k8s-version-829000"
	I1010 11:48:06.452461   14486 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:48:06.452465   14486 fix.go:54] fixHost starting: 
	I1010 11:48:06.452568   14486 fix.go:112] recreateIfNeeded on old-k8s-version-829000: state=Stopped err=<nil>
	W1010 11:48:06.452575   14486 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:48:06.456675   14486 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-829000" ...
	I1010 11:48:06.464615   14486 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:06.464649   14486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:b5:03:de:8d:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2
	I1010 11:48:06.466692   14486 main.go:141] libmachine: STDOUT: 
	I1010 11:48:06.466708   14486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:06.466735   14486 fix.go:56] duration metric: took 14.268458ms for fixHost
	I1010 11:48:06.466739   14486 start.go:83] releasing machines lock for "old-k8s-version-829000", held for 14.283333ms
	W1010 11:48:06.466744   14486 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:06.466786   14486 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:06.466790   14486 start.go:729] Will try again in 5 seconds ...
	I1010 11:48:11.468919   14486 start.go:360] acquireMachinesLock for old-k8s-version-829000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:11.469454   14486 start.go:364] duration metric: took 404.916µs to acquireMachinesLock for "old-k8s-version-829000"
	I1010 11:48:11.469691   14486 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:48:11.469713   14486 fix.go:54] fixHost starting: 
	I1010 11:48:11.470490   14486 fix.go:112] recreateIfNeeded on old-k8s-version-829000: state=Stopped err=<nil>
	W1010 11:48:11.470520   14486 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:48:11.478057   14486 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-829000" ...
	I1010 11:48:11.481977   14486 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:11.482185   14486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:b5:03:de:8d:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/old-k8s-version-829000/disk.qcow2
	I1010 11:48:11.492771   14486 main.go:141] libmachine: STDOUT: 
	I1010 11:48:11.492843   14486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:11.492944   14486 fix.go:56] duration metric: took 23.235583ms for fixHost
	I1010 11:48:11.492961   14486 start.go:83] releasing machines lock for "old-k8s-version-829000", held for 23.411167ms
	W1010 11:48:11.493128   14486 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-829000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-829000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:11.500949   14486 out.go:201] 
	W1010 11:48:11.505012   14486 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:11.505040   14486 out.go:270] * 
	* 
	W1010 11:48:11.507650   14486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:48:11.515971   14486 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-829000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000: exit status 7 (64.529791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
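
SecondStart takes the existing-machine path (fixHost) and re-runs the identical socket_vmnet_client invocation, so it fails at the same socket without ever booting the VM. The refusal can be reproduced without QEMU at all; a sketch using the BSD nc shipped with macOS, whose -U flag connects to unix-domain sockets:

	nc -U /var/run/socket_vmnet < /dev/null \
		&& echo "socket accepts connections" \
		|| echo "Connection refused: socket_vmnet is not listening"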

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-829000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000: exit status 7 (35.611542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-829000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-829000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-829000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.924541ms)

** stderr ** 
	error: context "old-k8s-version-829000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-829000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000: exit status 7 (33.357791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-829000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000: exit status 7 (34.223167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
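
The "got" side of the diff is empty because there is no VM to hold any images; the "want" side lists the eight k8s.gcr.io images the test expects for Kubernetes v1.20.0. On a healthy cluster the same command the test runs would show them (sketch, with the profile name from the log):

	out/minikube-darwin-arm64 -p old-k8s-version-829000 image list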

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-829000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-829000 --alsologtostderr -v=1: exit status 83 (45.047167ms)

-- stdout --
	* The control-plane node old-k8s-version-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-829000"

-- /stdout --
** stderr ** 
	I1010 11:48:11.804095   14505 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:11.805272   14505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:11.805281   14505 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:11.805283   14505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:11.805511   14505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:11.805725   14505 out.go:352] Setting JSON to false
	I1010 11:48:11.805733   14505 mustload.go:65] Loading cluster: old-k8s-version-829000
	I1010 11:48:11.805980   14505 config.go:182] Loaded profile config "old-k8s-version-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1010 11:48:11.809486   14505 out.go:177] * The control-plane node old-k8s-version-829000 host is not running: state=Stopped
	I1010 11:48:11.812522   14505 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-829000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-829000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000: exit status 7 (33.417125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-829000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000: exit status 7 (33.495708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
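
The Pause failure is a cascade rather than an independent regression: the profile's control-plane host never came up in the earlier start step, so "minikube pause" exits with status 83 (host not running) and prints its own recovery hint. A minimal recovery sketch, using only commands already shown in this log:

	# confirm the host is stopped, then bring the profile back up
	out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000
	out/minikube-darwin-arm64 start -p old-k8s-version-829000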

TestStartStop/group/no-preload/serial/FirstStart (9.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-477000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-477000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.733566333s)

-- stdout --
	* [no-preload-477000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-477000" primary control-plane node in "no-preload-477000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-477000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:48:12.142987   14522 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:12.143155   14522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:12.143159   14522 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:12.143161   14522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:12.143285   14522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:12.144477   14522 out.go:352] Setting JSON to false
	I1010 11:48:12.162401   14522 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8263,"bootTime":1728577829,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:48:12.162472   14522 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:48:12.164768   14522 out.go:177] * [no-preload-477000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:48:12.171495   14522 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:48:12.171542   14522 notify.go:220] Checking for updates...
	I1010 11:48:12.177461   14522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:48:12.180472   14522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:48:12.181789   14522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:48:12.184441   14522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:48:12.187470   14522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:48:12.190881   14522 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:12.190943   14522 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1010 11:48:12.191003   14522 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:48:12.195368   14522 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:48:12.202426   14522 start.go:297] selected driver: qemu2
	I1010 11:48:12.202433   14522 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:48:12.202438   14522 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:48:12.204768   14522 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:48:12.207446   14522 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:48:12.210474   14522 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:48:12.210487   14522 cni.go:84] Creating CNI manager for ""
	I1010 11:48:12.210506   14522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:48:12.210511   14522 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:48:12.210544   14522 start.go:340] cluster config:
	{Name:no-preload-477000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-477000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:12.214750   14522 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:12.222426   14522 out.go:177] * Starting "no-preload-477000" primary control-plane node in "no-preload-477000" cluster
	I1010 11:48:12.226410   14522 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:48:12.226487   14522 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/no-preload-477000/config.json ...
	I1010 11:48:12.226505   14522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/no-preload-477000/config.json: {Name:mk24c386f44ddb2f9c4132d6d53ae1c393f96fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:48:12.226519   14522 cache.go:107] acquiring lock: {Name:mk89864d4a71c1101f1bcc3d5dc60cc98a46db0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:12.226573   14522 cache.go:107] acquiring lock: {Name:mk48ccc7e1f292da1098d1628f2f236a0fa934ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:12.226606   14522 cache.go:107] acquiring lock: {Name:mk1c08651cc89a8cc96eca6b6490986782a4bb86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:12.226601   14522 cache.go:115] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1010 11:48:12.226631   14522 cache.go:107] acquiring lock: {Name:mkcda3eaa52f041f1788323fe3b9624b56f93604 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:12.226637   14522 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.5µs
	I1010 11:48:12.226663   14522 cache.go:107] acquiring lock: {Name:mk89508dbad6a8d123fb29f4c04d1b5b31de7ae0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:12.226677   14522 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1010 11:48:12.226634   14522 cache.go:107] acquiring lock: {Name:mk0642d648a62534863974347976a004fbcb005e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:12.226582   14522 cache.go:107] acquiring lock: {Name:mk1c0a870839dbd20a62a781ac3d2d9650e516e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:12.226636   14522 cache.go:107] acquiring lock: {Name:mkebe654ccfa5a266d046cccf3f3b48d317c9426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:12.226899   14522 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1010 11:48:12.226935   14522 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 11:48:12.227026   14522 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 11:48:12.227062   14522 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 11:48:12.227085   14522 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 11:48:12.227088   14522 start.go:360] acquireMachinesLock for no-preload-477000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:12.227130   14522 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 11:48:12.227207   14522 start.go:364] duration metric: took 108.417µs to acquireMachinesLock for "no-preload-477000"
	I1010 11:48:12.227234   14522 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1010 11:48:12.227228   14522 start.go:93] Provisioning new machine with config: &{Name:no-preload-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-477000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:48:12.227287   14522 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:48:12.231447   14522 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:48:12.240671   14522 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 11:48:12.240707   14522 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 11:48:12.240783   14522 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1010 11:48:12.241210   14522 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 11:48:12.241221   14522 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1010 11:48:12.241289   14522 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 11:48:12.241301   14522 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 11:48:12.246875   14522 start.go:159] libmachine.API.Create for "no-preload-477000" (driver="qemu2")
	I1010 11:48:12.246894   14522 client.go:168] LocalClient.Create starting
	I1010 11:48:12.246974   14522 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:48:12.247009   14522 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:12.247020   14522 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:12.247061   14522 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:48:12.247090   14522 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:12.247097   14522 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:12.247450   14522 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:48:12.408884   14522 main.go:141] libmachine: Creating SSH key...
	I1010 11:48:12.446283   14522 main.go:141] libmachine: Creating Disk image...
	I1010 11:48:12.446303   14522 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:48:12.446987   14522 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2
	I1010 11:48:12.456831   14522 main.go:141] libmachine: STDOUT: 
	I1010 11:48:12.456868   14522 main.go:141] libmachine: STDERR: 
	I1010 11:48:12.456947   14522 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2 +20000M
	I1010 11:48:12.465959   14522 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:48:12.465979   14522 main.go:141] libmachine: STDERR: 
	I1010 11:48:12.466007   14522 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2
	I1010 11:48:12.466011   14522 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:48:12.466027   14522 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:12.466061   14522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:69:be:a3:11:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2
	I1010 11:48:12.468135   14522 main.go:141] libmachine: STDOUT: 
	I1010 11:48:12.468154   14522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:12.468173   14522 client.go:171] duration metric: took 221.276833ms to LocalClient.Create
	I1010 11:48:12.725710   14522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1010 11:48:12.748285   14522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1010 11:48:12.749455   14522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I1010 11:48:12.796547   14522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I1010 11:48:12.846854   14522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1010 11:48:12.921732   14522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I1010 11:48:13.009736   14522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1010 11:48:13.012215   14522 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1010 11:48:13.012238   14522 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 785.70625ms
	I1010 11:48:13.012251   14522 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1010 11:48:14.468386   14522 start.go:128] duration metric: took 2.241101583s to createHost
	I1010 11:48:14.468446   14522 start.go:83] releasing machines lock for "no-preload-477000", held for 2.241255458s
	W1010 11:48:14.468503   14522 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:14.485811   14522 out.go:177] * Deleting "no-preload-477000" in qemu2 ...
	W1010 11:48:14.509616   14522 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:14.509646   14522 start.go:729] Will try again in 5 seconds ...
	I1010 11:48:15.887555   14522 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1010 11:48:15.887628   14522 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.661058834s
	I1010 11:48:15.887657   14522 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1010 11:48:16.188085   14522 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1010 11:48:16.188130   14522 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.961580625s
	I1010 11:48:16.188187   14522 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1010 11:48:16.748017   14522 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1010 11:48:16.748077   14522 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.521494792s
	I1010 11:48:16.748103   14522 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1010 11:48:16.983128   14522 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1010 11:48:16.983197   14522 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.756685625s
	I1010 11:48:16.983224   14522 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1010 11:48:17.317128   14522 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1010 11:48:17.317209   14522 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 5.090604459s
	I1010 11:48:17.317240   14522 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1010 11:48:19.510267   14522 start.go:360] acquireMachinesLock for no-preload-477000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:19.510785   14522 start.go:364] duration metric: took 435.667µs to acquireMachinesLock for "no-preload-477000"
	I1010 11:48:19.510934   14522 start.go:93] Provisioning new machine with config: &{Name:no-preload-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-477000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:48:19.511145   14522 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:48:19.520721   14522 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:48:19.570115   14522 start.go:159] libmachine.API.Create for "no-preload-477000" (driver="qemu2")
	I1010 11:48:19.570164   14522 client.go:168] LocalClient.Create starting
	I1010 11:48:19.570311   14522 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:48:19.570414   14522 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:19.570437   14522 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:19.570527   14522 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:48:19.570586   14522 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:19.570607   14522 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:19.571220   14522 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:48:19.742648   14522 main.go:141] libmachine: Creating SSH key...
	I1010 11:48:19.774732   14522 main.go:141] libmachine: Creating Disk image...
	I1010 11:48:19.774737   14522 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:48:19.774903   14522 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2
	I1010 11:48:19.784866   14522 main.go:141] libmachine: STDOUT: 
	I1010 11:48:19.784884   14522 main.go:141] libmachine: STDERR: 
	I1010 11:48:19.784968   14522 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2 +20000M
	I1010 11:48:19.793588   14522 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:48:19.793604   14522 main.go:141] libmachine: STDERR: 
	I1010 11:48:19.793616   14522 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2
	I1010 11:48:19.793623   14522 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:48:19.793633   14522 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:19.793691   14522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:6f:e4:5e:b3:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2
	I1010 11:48:19.795696   14522 main.go:141] libmachine: STDOUT: 
	I1010 11:48:19.795710   14522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:19.795722   14522 client.go:171] duration metric: took 225.55575ms to LocalClient.Create
	I1010 11:48:21.447134   14522 cache.go:157] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1010 11:48:21.447193   14522 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 9.220742833s
	I1010 11:48:21.447256   14522 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1010 11:48:21.447332   14522 cache.go:87] Successfully saved all images to host disk.
	I1010 11:48:21.797861   14522 start.go:128] duration metric: took 2.286713333s to createHost
	I1010 11:48:21.797924   14522 start.go:83] releasing machines lock for "no-preload-477000", held for 2.2871425s
	W1010 11:48:21.798284   14522 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-477000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:21.811854   14522 out.go:201] 
	W1010 11:48:21.815970   14522 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:21.816000   14522 out.go:270] * 
	W1010 11:48:21.818559   14522 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:48:21.829887   14522 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-477000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000: exit status 7 (71.5955ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-477000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.81s)
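
Both VM-creation attempts above fail at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and provisioning aborts with GUEST_PROVISION / exit status 80. A hedged diagnostic sketch for the build host; the service label below is an assumption based on the /opt/socket_vmnet install paths in this log, not something the log itself confirms:

	# is the daemon alive and its socket present?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# exercise the client exactly as minikube does, with a trivial command in place of qemu
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# if nothing is listening, (re)start the daemon; the label assumes the launchd
	# install from the socket_vmnet README (a Homebrew-managed install would instead
	# use "sudo brew services restart socket_vmnet")
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet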

TestStartStop/group/embed-certs/serial/FirstStart (11.21s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-509000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-509000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.136477917s)

-- stdout --
	* [embed-certs-509000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-509000" primary control-plane node in "embed-certs-509000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-509000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I1010 11:48:13.167982   14563 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:13.168118   14563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:13.168121   14563 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:13.168124   14563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:13.168254   14563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:13.169498   14563 out.go:352] Setting JSON to false
	I1010 11:48:13.187734   14563 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8264,"bootTime":1728577829,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:48:13.187803   14563 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:48:13.191517   14563 out.go:177] * [embed-certs-509000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:48:13.198490   14563 notify.go:220] Checking for updates...
	I1010 11:48:13.203303   14563 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:48:13.207439   14563 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:48:13.214236   14563 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:48:13.222404   14563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:48:13.229445   14563 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:48:13.236398   14563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:48:13.240778   14563 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:13.240858   14563 config.go:182] Loaded profile config "no-preload-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:13.240911   14563 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:48:13.244461   14563 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:48:13.251401   14563 start.go:297] selected driver: qemu2
	I1010 11:48:13.251407   14563 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:48:13.251412   14563 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:48:13.254052   14563 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:48:13.258477   14563 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:48:13.262463   14563 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:48:13.262481   14563 cni.go:84] Creating CNI manager for ""
	I1010 11:48:13.262504   14563 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:48:13.262512   14563 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:48:13.262560   14563 start.go:340] cluster config:
	{Name:embed-certs-509000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:13.267531   14563 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:13.274402   14563 out.go:177] * Starting "embed-certs-509000" primary control-plane node in "embed-certs-509000" cluster
	I1010 11:48:13.278470   14563 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:48:13.278487   14563 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:48:13.278497   14563 cache.go:56] Caching tarball of preloaded images
	I1010 11:48:13.278575   14563 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:48:13.278581   14563 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:48:13.278643   14563 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/embed-certs-509000/config.json ...
	I1010 11:48:13.278655   14563 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/embed-certs-509000/config.json: {Name:mkeaad562bd2e2cc027f44a6ef875634d94da78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:48:13.278934   14563 start.go:360] acquireMachinesLock for embed-certs-509000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:14.468612   14563 start.go:364] duration metric: took 1.189635375s to acquireMachinesLock for "embed-certs-509000"
	I1010 11:48:14.468710   14563 start.go:93] Provisioning new machine with config: &{Name:embed-certs-509000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:48:14.468974   14563 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:48:14.477556   14563 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:48:14.529453   14563 start.go:159] libmachine.API.Create for "embed-certs-509000" (driver="qemu2")
	I1010 11:48:14.529516   14563 client.go:168] LocalClient.Create starting
	I1010 11:48:14.529683   14563 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:48:14.529762   14563 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:14.529791   14563 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:14.529873   14563 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:48:14.529931   14563 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:14.529950   14563 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:14.530672   14563 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:48:14.698129   14563 main.go:141] libmachine: Creating SSH key...
	I1010 11:48:14.810966   14563 main.go:141] libmachine: Creating Disk image...
	I1010 11:48:14.810975   14563 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:48:14.811197   14563 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2
	I1010 11:48:14.821191   14563 main.go:141] libmachine: STDOUT: 
	I1010 11:48:14.821216   14563 main.go:141] libmachine: STDERR: 
	I1010 11:48:14.821280   14563 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2 +20000M
	I1010 11:48:14.830205   14563 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:48:14.830218   14563 main.go:141] libmachine: STDERR: 
	I1010 11:48:14.830231   14563 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2
	I1010 11:48:14.830237   14563 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:48:14.830252   14563 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:14.830285   14563 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:bd:07:52:41:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2
	I1010 11:48:14.832136   14563 main.go:141] libmachine: STDOUT: 
	I1010 11:48:14.832150   14563 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:14.832166   14563 client.go:171] duration metric: took 302.647667ms to LocalClient.Create
	I1010 11:48:16.834295   14563 start.go:128] duration metric: took 2.365316167s to createHost
	I1010 11:48:16.834369   14563 start.go:83] releasing machines lock for "embed-certs-509000", held for 2.365759167s
	W1010 11:48:16.834432   14563 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:16.845863   14563 out.go:177] * Deleting "embed-certs-509000" in qemu2 ...
	W1010 11:48:16.867398   14563 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:16.867434   14563 start.go:729] Will try again in 5 seconds ...
	I1010 11:48:21.869578   14563 start.go:360] acquireMachinesLock for embed-certs-509000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:21.869802   14563 start.go:364] duration metric: took 156.875µs to acquireMachinesLock for "embed-certs-509000"
	I1010 11:48:21.869853   14563 start.go:93] Provisioning new machine with config: &{Name:embed-certs-509000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:48:21.870030   14563 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:48:21.874673   14563 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:48:21.904192   14563 start.go:159] libmachine.API.Create for "embed-certs-509000" (driver="qemu2")
	I1010 11:48:21.904232   14563 client.go:168] LocalClient.Create starting
	I1010 11:48:21.904318   14563 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:48:21.904356   14563 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:21.904366   14563 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:21.904417   14563 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:48:21.904443   14563 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:21.904451   14563 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:21.904889   14563 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:48:22.068370   14563 main.go:141] libmachine: Creating SSH key...
	I1010 11:48:22.207265   14563 main.go:141] libmachine: Creating Disk image...
	I1010 11:48:22.207277   14563 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:48:22.210883   14563 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2
	I1010 11:48:22.223456   14563 main.go:141] libmachine: STDOUT: 
	I1010 11:48:22.223486   14563 main.go:141] libmachine: STDERR: 
	I1010 11:48:22.223556   14563 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2 +20000M
	I1010 11:48:22.233219   14563 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:48:22.233243   14563 main.go:141] libmachine: STDERR: 
	I1010 11:48:22.233267   14563 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2
	I1010 11:48:22.233274   14563 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:48:22.233283   14563 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:22.233323   14563 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:8b:06:2e:a5:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2
	I1010 11:48:22.235193   14563 main.go:141] libmachine: STDOUT: 
	I1010 11:48:22.235209   14563 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:22.235224   14563 client.go:171] duration metric: took 330.991959ms to LocalClient.Create
	I1010 11:48:24.236389   14563 start.go:128] duration metric: took 2.366364792s to createHost
	I1010 11:48:24.236469   14563 start.go:83] releasing machines lock for "embed-certs-509000", held for 2.366677708s
	W1010 11:48:24.236903   14563 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-509000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-509000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:24.242669   14563 out.go:201] 
	W1010 11:48:24.246628   14563 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:24.246664   14563 out.go:270] * 
	* 
	W1010 11:48:24.249511   14563 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:48:24.255594   14563 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-509000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000: exit status 7 (70.137667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-509000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.21s)
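
Note: every failure in this group reduces to the same root cause shown above: libmachine cannot reach the socket_vmnet control socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused"). As a quick host-side check, a minimal Go sketch that performs the same Unix-socket dial (the socket path is taken from the failing command line; the program itself is illustrative and not part of the test suite):

    // dialcheck.go: probe the socket_vmnet control socket directly.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path from the failing qemu invocation
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // "connection refused" means nothing is listening on the socket;
            // a "no such file" error means the daemon never created it.
            fmt.Println("dial failed:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the dial fails, restarting the socket_vmnet daemon on the host (for Homebrew installs, typically `sudo brew services start socket_vmnet`) is the usual remedy; the `minikube delete -p embed-certs-509000` suggested in the log recreates the profile but does not restart the daemon.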

TestStartStop/group/no-preload/serial/DeployApp (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-477000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-477000 create -f testdata/busybox.yaml: exit status 1 (32.047833ms)

** stderr ** 
	error: context "no-preload-477000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-477000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000: exit status 7 (40.445333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-477000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000: exit status 7 (39.774417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-477000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.11s)
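
Note: each post-mortem `minikube status` call in these blocks exits with code 7, which helpers_test.go flags as "may be ok". Assuming minikube's usual bitmask convention for status exit codes (bit 0 = host not running, bit 1 = cluster not running, bit 2 = kubernetes not running; an assumption worth confirming against this minikube build), 7 simply encodes "everything stopped", consistent with the "Stopped" stdout. A small decoder sketch:

    // statuscode.go: decode a minikube "status" exit code under the
    // assumed bitmask convention described above (verify per version).
    package main

    import "fmt"

    const (
        hostNotRunning    = 1 << 0 // VM/host stopped
        clusterNotRunning = 1 << 1 // control plane stopped
        k8sNotRunning     = 1 << 2 // apiserver unreachable
    )

    func main() {
        code := 7 // exit status observed in the post-mortems above
        fmt.Println("host stopped:", code&hostNotRunning != 0)
        fmt.Println("cluster stopped:", code&clusterNotRunning != 0)
        fmt.Println("kubernetes stopped:", code&k8sNotRunning != 0)
    }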

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-477000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-477000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-477000 describe deploy/metrics-server -n kube-system: exit status 1 (27.045958ms)

** stderr ** 
	error: context "no-preload-477000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-477000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000: exit status 7 (33.055834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-477000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.26s)

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-509000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-509000 create -f testdata/busybox.yaml: exit status 1 (29.044166ms)

** stderr ** 
	error: context "embed-certs-509000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-509000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000: exit status 7 (33.131083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-509000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000: exit status 7 (33.407542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-509000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-509000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-509000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-509000 describe deploy/metrics-server -n kube-system: exit status 1 (27.545875ms)

** stderr ** 
	error: context "embed-certs-509000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-509000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000: exit status 7 (33.694667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-509000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-477000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-477000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.195992042s)

-- stdout --
	* [no-preload-477000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-477000" primary control-plane node in "no-preload-477000" cluster
	* Restarting existing qemu2 VM for "no-preload-477000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-477000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:48:25.755852   14641 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:25.756018   14641 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:25.756021   14641 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:25.756023   14641 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:25.756160   14641 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:25.757258   14641 out.go:352] Setting JSON to false
	I1010 11:48:25.774816   14641 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8276,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:48:25.774889   14641 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:48:25.779913   14641 out.go:177] * [no-preload-477000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:48:25.786823   14641 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:48:25.786880   14641 notify.go:220] Checking for updates...
	I1010 11:48:25.794772   14641 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:48:25.797777   14641 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:48:25.800824   14641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:48:25.803828   14641 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:48:25.806737   14641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:48:25.810097   14641 config.go:182] Loaded profile config "no-preload-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:25.810367   14641 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:48:25.814748   14641 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:48:25.821812   14641 start.go:297] selected driver: qemu2
	I1010 11:48:25.821821   14641 start.go:901] validating driver "qemu2" against &{Name:no-preload-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-477000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:25.821883   14641 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:48:25.824463   14641 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:48:25.824491   14641 cni.go:84] Creating CNI manager for ""
	I1010 11:48:25.824521   14641 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:48:25.824550   14641 start.go:340] cluster config:
	{Name:no-preload-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-477000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:25.829131   14641 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:25.836726   14641 out.go:177] * Starting "no-preload-477000" primary control-plane node in "no-preload-477000" cluster
	I1010 11:48:25.840767   14641 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:48:25.840879   14641 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/no-preload-477000/config.json ...
	I1010 11:48:25.840881   14641 cache.go:107] acquiring lock: {Name:mk89864d4a71c1101f1bcc3d5dc60cc98a46db0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:25.840881   14641 cache.go:107] acquiring lock: {Name:mk0642d648a62534863974347976a004fbcb005e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:25.840912   14641 cache.go:107] acquiring lock: {Name:mk89508dbad6a8d123fb29f4c04d1b5b31de7ae0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:25.840978   14641 cache.go:115] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1010 11:48:25.840986   14641 cache.go:115] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1010 11:48:25.840988   14641 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 118.542µs
	I1010 11:48:25.840992   14641 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 126.583µs
	I1010 11:48:25.840993   14641 cache.go:115] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1010 11:48:25.841042   14641 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 173.167µs
	I1010 11:48:25.841048   14641 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1010 11:48:25.840997   14641 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1010 11:48:25.841041   14641 cache.go:107] acquiring lock: {Name:mk48ccc7e1f292da1098d1628f2f236a0fa934ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:25.841056   14641 cache.go:107] acquiring lock: {Name:mk1c08651cc89a8cc96eca6b6490986782a4bb86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:25.840994   14641 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1010 11:48:25.841000   14641 cache.go:107] acquiring lock: {Name:mk1c0a870839dbd20a62a781ac3d2d9650e516e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:25.841004   14641 cache.go:107] acquiring lock: {Name:mkcda3eaa52f041f1788323fe3b9624b56f93604 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:25.841105   14641 cache.go:115] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1010 11:48:25.841011   14641 cache.go:107] acquiring lock: {Name:mkebe654ccfa5a266d046cccf3f3b48d317c9426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:25.841138   14641 cache.go:115] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1010 11:48:25.841142   14641 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 103.292µs
	I1010 11:48:25.841146   14641 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1010 11:48:25.841113   14641 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 105.417µs
	I1010 11:48:25.841153   14641 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1010 11:48:25.841155   14641 cache.go:115] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1010 11:48:25.841164   14641 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 165.167µs
	I1010 11:48:25.841169   14641 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1010 11:48:25.841159   14641 cache.go:115] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1010 11:48:25.841173   14641 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 170.5µs
	I1010 11:48:25.841178   14641 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1010 11:48:25.841283   14641 cache.go:115] /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1010 11:48:25.841287   14641 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 276.791µs
	I1010 11:48:25.841292   14641 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1010 11:48:25.841296   14641 cache.go:87] Successfully saved all images to host disk.
	I1010 11:48:25.841333   14641 start.go:360] acquireMachinesLock for no-preload-477000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:25.841365   14641 start.go:364] duration metric: took 26.542µs to acquireMachinesLock for "no-preload-477000"
	I1010 11:48:25.841380   14641 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:48:25.841384   14641 fix.go:54] fixHost starting: 
	I1010 11:48:25.841515   14641 fix.go:112] recreateIfNeeded on no-preload-477000: state=Stopped err=<nil>
	W1010 11:48:25.841523   14641 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:48:25.849836   14641 out.go:177] * Restarting existing qemu2 VM for "no-preload-477000" ...
	I1010 11:48:25.853757   14641 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:25.853800   14641 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:6f:e4:5e:b3:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2
	I1010 11:48:25.856147   14641 main.go:141] libmachine: STDOUT: 
	I1010 11:48:25.856168   14641 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:25.856193   14641 fix.go:56] duration metric: took 14.806709ms for fixHost
	I1010 11:48:25.856198   14641 start.go:83] releasing machines lock for "no-preload-477000", held for 14.828ms
	W1010 11:48:25.856205   14641 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:25.856236   14641 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:25.856242   14641 start.go:729] Will try again in 5 seconds ...
	I1010 11:48:30.858403   14641 start.go:360] acquireMachinesLock for no-preload-477000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:30.858859   14641 start.go:364] duration metric: took 373.792µs to acquireMachinesLock for "no-preload-477000"
	I1010 11:48:30.859000   14641 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:48:30.859017   14641 fix.go:54] fixHost starting: 
	I1010 11:48:30.859751   14641 fix.go:112] recreateIfNeeded on no-preload-477000: state=Stopped err=<nil>
	W1010 11:48:30.859782   14641 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:48:30.868158   14641 out.go:177] * Restarting existing qemu2 VM for "no-preload-477000" ...
	I1010 11:48:30.873221   14641 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:30.873401   14641 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:6f:e4:5e:b3:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/no-preload-477000/disk.qcow2
	I1010 11:48:30.883563   14641 main.go:141] libmachine: STDOUT: 
	I1010 11:48:30.883631   14641 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:30.883709   14641 fix.go:56] duration metric: took 24.692917ms for fixHost
	I1010 11:48:30.883727   14641 start.go:83] releasing machines lock for "no-preload-477000", held for 24.847333ms
	W1010 11:48:30.883917   14641 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-477000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-477000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:30.891969   14641 out.go:201] 
	W1010 11:48:30.896295   14641 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:30.896344   14641 out.go:270] * 
	* 
	W1010 11:48:30.899316   14641 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:48:30.907175   14641 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-477000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000: exit status 7 (72.651666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-477000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
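
Note: unlike FirstStart, the SecondStart path reuses the existing machine (fixHost) and, when the driver start fails, retries exactly once after five seconds before exiting with GUEST_PROVISION, which is why the "Restarting existing qemu2 VM" stanza appears twice above. A schematic of that control flow (startHost is a hypothetical stand-in for the libmachine driver call, not the actual implementation):

    // retryflow.go: the single-retry shape visible in the log above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the qemu-system-aarch64 launch through
    // socket_vmnet_client, which fails while the daemon is unreachable.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }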

TestStartStop/group/embed-certs/serial/SecondStart (6.57s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-509000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-509000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.49994525s)

-- stdout --
	* [embed-certs-509000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-509000" primary control-plane node in "embed-certs-509000" cluster
	* Restarting existing qemu2 VM for "embed-certs-509000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-509000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:48:27.719252   14662 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:27.719416   14662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:27.719420   14662 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:27.719422   14662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:27.719560   14662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:27.720663   14662 out.go:352] Setting JSON to false
	I1010 11:48:27.738163   14662 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8278,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:48:27.738228   14662 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:48:27.742078   14662 out.go:177] * [embed-certs-509000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:48:27.748954   14662 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:48:27.749023   14662 notify.go:220] Checking for updates...
	I1010 11:48:27.757045   14662 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:48:27.760016   14662 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:48:27.762999   14662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:48:27.766050   14662 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:48:27.769016   14662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:48:27.772393   14662 config.go:182] Loaded profile config "embed-certs-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:27.772683   14662 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:48:27.777068   14662 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:48:27.784000   14662 start.go:297] selected driver: qemu2
	I1010 11:48:27.784005   14662 start.go:901] validating driver "qemu2" against &{Name:embed-certs-509000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:27.784079   14662 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:48:27.786604   14662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:48:27.786632   14662 cni.go:84] Creating CNI manager for ""
	I1010 11:48:27.786652   14662 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:48:27.786684   14662 start.go:340] cluster config:
	{Name:embed-certs-509000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:27.791199   14662 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:27.798035   14662 out.go:177] * Starting "embed-certs-509000" primary control-plane node in "embed-certs-509000" cluster
	I1010 11:48:27.801928   14662 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:48:27.801942   14662 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:48:27.801951   14662 cache.go:56] Caching tarball of preloaded images
	I1010 11:48:27.802029   14662 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:48:27.802034   14662 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:48:27.802085   14662 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/embed-certs-509000/config.json ...
	I1010 11:48:27.802413   14662 start.go:360] acquireMachinesLock for embed-certs-509000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:27.802443   14662 start.go:364] duration metric: took 23.792µs to acquireMachinesLock for "embed-certs-509000"
	I1010 11:48:27.802452   14662 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:48:27.802456   14662 fix.go:54] fixHost starting: 
	I1010 11:48:27.802578   14662 fix.go:112] recreateIfNeeded on embed-certs-509000: state=Stopped err=<nil>
	W1010 11:48:27.802586   14662 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:48:27.810994   14662 out.go:177] * Restarting existing qemu2 VM for "embed-certs-509000" ...
	I1010 11:48:27.813912   14662 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:27.813958   14662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:8b:06:2e:a5:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2
	I1010 11:48:27.816143   14662 main.go:141] libmachine: STDOUT: 
	I1010 11:48:27.816160   14662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:27.816189   14662 fix.go:56] duration metric: took 13.730875ms for fixHost
	I1010 11:48:27.816194   14662 start.go:83] releasing machines lock for "embed-certs-509000", held for 13.747417ms
	W1010 11:48:27.816200   14662 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:27.816244   14662 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:27.816249   14662 start.go:729] Will try again in 5 seconds ...
	I1010 11:48:32.818403   14662 start.go:360] acquireMachinesLock for embed-certs-509000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:34.103653   14662 start.go:364] duration metric: took 1.2851415s to acquireMachinesLock for "embed-certs-509000"
	I1010 11:48:34.103795   14662 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:48:34.103816   14662 fix.go:54] fixHost starting: 
	I1010 11:48:34.104597   14662 fix.go:112] recreateIfNeeded on embed-certs-509000: state=Stopped err=<nil>
	W1010 11:48:34.104624   14662 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:48:34.116207   14662 out.go:177] * Restarting existing qemu2 VM for "embed-certs-509000" ...
	I1010 11:48:34.127132   14662 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:34.127384   14662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:8b:06:2e:a5:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/embed-certs-509000/disk.qcow2
	I1010 11:48:34.139012   14662 main.go:141] libmachine: STDOUT: 
	I1010 11:48:34.139069   14662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:34.139145   14662 fix.go:56] duration metric: took 35.332167ms for fixHost
	I1010 11:48:34.139166   14662 start.go:83] releasing machines lock for "embed-certs-509000", held for 35.477875ms
	W1010 11:48:34.139409   14662 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-509000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-509000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:34.148209   14662 out.go:201] 
	W1010 11:48:34.154332   14662 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:34.154358   14662 out.go:270] * 
	* 
	W1010 11:48:34.156442   14662 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:48:34.170235   14662 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-509000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000: exit status 7 (70.790291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-509000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.57s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-477000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000: exit status 7 (35.728375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-477000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-477000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-477000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-477000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.512625ms)

** stderr ** 
	error: context "no-preload-477000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-477000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000: exit status 7 (33.731291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-477000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-477000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000: exit status 7 (33.407666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-477000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-477000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-477000 --alsologtostderr -v=1: exit status 83 (44.181875ms)

-- stdout --
	* The control-plane node no-preload-477000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-477000"

-- /stdout --
** stderr ** 
	I1010 11:48:31.200942   14681 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:31.201135   14681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:31.201138   14681 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:31.201141   14681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:31.201279   14681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:31.201493   14681 out.go:352] Setting JSON to false
	I1010 11:48:31.201501   14681 mustload.go:65] Loading cluster: no-preload-477000
	I1010 11:48:31.201725   14681 config.go:182] Loaded profile config "no-preload-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:31.206190   14681 out.go:177] * The control-plane node no-preload-477000 host is not running: state=Stopped
	I1010 11:48:31.209172   14681 out.go:177]   To start a cluster, run: "minikube start -p no-preload-477000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-477000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000: exit status 7 (33.396875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-477000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000: exit status 7 (33.44725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-477000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-320000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-320000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.889635833s)

-- stdout --
	* [default-k8s-diff-port-320000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-320000" primary control-plane node in "default-k8s-diff-port-320000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-320000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:48:31.662496   14705 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:31.662654   14705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:31.662658   14705 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:31.662661   14705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:31.662789   14705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:31.663963   14705 out.go:352] Setting JSON to false
	I1010 11:48:31.681458   14705 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8282,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:48:31.681534   14705 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:48:31.686358   14705 out.go:177] * [default-k8s-diff-port-320000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:48:31.693333   14705 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:48:31.693461   14705 notify.go:220] Checking for updates...
	I1010 11:48:31.700252   14705 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:48:31.703246   14705 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:48:31.706320   14705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:48:31.709274   14705 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:48:31.712265   14705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:48:31.715608   14705 config.go:182] Loaded profile config "embed-certs-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:31.715673   14705 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:31.715718   14705 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:48:31.720184   14705 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:48:31.727299   14705 start.go:297] selected driver: qemu2
	I1010 11:48:31.727307   14705 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:48:31.727315   14705 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:48:31.729756   14705 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:48:31.733212   14705 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:48:31.736402   14705 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:48:31.736425   14705 cni.go:84] Creating CNI manager for ""
	I1010 11:48:31.736454   14705 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:48:31.736461   14705 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:48:31.736491   14705 start.go:340] cluster config:
	{Name:default-k8s-diff-port-320000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-320000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:31.741147   14705 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:31.748224   14705 out.go:177] * Starting "default-k8s-diff-port-320000" primary control-plane node in "default-k8s-diff-port-320000" cluster
	I1010 11:48:31.752266   14705 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:48:31.752280   14705 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:48:31.752289   14705 cache.go:56] Caching tarball of preloaded images
	I1010 11:48:31.752364   14705 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:48:31.752370   14705 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:48:31.752431   14705 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/default-k8s-diff-port-320000/config.json ...
	I1010 11:48:31.752443   14705 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/default-k8s-diff-port-320000/config.json: {Name:mk12f8b707fc1a377710fb647d400e7be08fb17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:48:31.752833   14705 start.go:360] acquireMachinesLock for default-k8s-diff-port-320000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:31.752886   14705 start.go:364] duration metric: took 44.916µs to acquireMachinesLock for "default-k8s-diff-port-320000"
	I1010 11:48:31.752900   14705 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-320000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-320000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:48:31.752927   14705 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:48:31.760283   14705 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:48:31.777906   14705 start.go:159] libmachine.API.Create for "default-k8s-diff-port-320000" (driver="qemu2")
	I1010 11:48:31.777932   14705 client.go:168] LocalClient.Create starting
	I1010 11:48:31.778012   14705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:48:31.778052   14705 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:31.778064   14705 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:31.778103   14705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:48:31.778137   14705 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:31.778146   14705 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:31.778675   14705 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:48:31.935693   14705 main.go:141] libmachine: Creating SSH key...
	I1010 11:48:32.080376   14705 main.go:141] libmachine: Creating Disk image...
	I1010 11:48:32.080385   14705 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:48:32.080617   14705 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2
	I1010 11:48:32.090611   14705 main.go:141] libmachine: STDOUT: 
	I1010 11:48:32.090627   14705 main.go:141] libmachine: STDERR: 
	I1010 11:48:32.090683   14705 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2 +20000M
	I1010 11:48:32.099305   14705 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:48:32.099325   14705 main.go:141] libmachine: STDERR: 
	I1010 11:48:32.099356   14705 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2
	I1010 11:48:32.099368   14705 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:48:32.099381   14705 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:32.099422   14705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:81:1b:fb:c8:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2
	I1010 11:48:32.101247   14705 main.go:141] libmachine: STDOUT: 
	I1010 11:48:32.101260   14705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:32.101279   14705 client.go:171] duration metric: took 323.344959ms to LocalClient.Create
	I1010 11:48:34.103421   14705 start.go:128] duration metric: took 2.3505065s to createHost
	I1010 11:48:34.103488   14705 start.go:83] releasing machines lock for "default-k8s-diff-port-320000", held for 2.350621875s
	W1010 11:48:34.103553   14705 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:34.124239   14705 out.go:177] * Deleting "default-k8s-diff-port-320000" in qemu2 ...
	W1010 11:48:34.172657   14705 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:34.172699   14705 start.go:729] Will try again in 5 seconds ...
	I1010 11:48:39.174908   14705 start.go:360] acquireMachinesLock for default-k8s-diff-port-320000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:39.175455   14705 start.go:364] duration metric: took 412.5µs to acquireMachinesLock for "default-k8s-diff-port-320000"
	I1010 11:48:39.175598   14705 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-320000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-320000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:48:39.175905   14705 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:48:39.185542   14705 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:48:39.234412   14705 start.go:159] libmachine.API.Create for "default-k8s-diff-port-320000" (driver="qemu2")
	I1010 11:48:39.234471   14705 client.go:168] LocalClient.Create starting
	I1010 11:48:39.234641   14705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:48:39.234739   14705 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:39.234766   14705 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:39.234844   14705 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:48:39.234903   14705 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:39.234954   14705 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:39.235861   14705 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:48:39.406302   14705 main.go:141] libmachine: Creating SSH key...
	I1010 11:48:39.456737   14705 main.go:141] libmachine: Creating Disk image...
	I1010 11:48:39.456743   14705 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:48:39.456925   14705 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2
	I1010 11:48:39.466558   14705 main.go:141] libmachine: STDOUT: 
	I1010 11:48:39.466581   14705 main.go:141] libmachine: STDERR: 
	I1010 11:48:39.466632   14705 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2 +20000M
	I1010 11:48:39.475052   14705 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:48:39.475069   14705 main.go:141] libmachine: STDERR: 
	I1010 11:48:39.475090   14705 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2
	I1010 11:48:39.475098   14705 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:48:39.475107   14705 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:39.475141   14705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:61:c0:f6:59:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2
	I1010 11:48:39.476914   14705 main.go:141] libmachine: STDOUT: 
	I1010 11:48:39.476927   14705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:39.476947   14705 client.go:171] duration metric: took 242.474042ms to LocalClient.Create
	I1010 11:48:41.479093   14705 start.go:128] duration metric: took 2.303149583s to createHost
	I1010 11:48:41.479206   14705 start.go:83] releasing machines lock for "default-k8s-diff-port-320000", held for 2.303754084s
	W1010 11:48:41.479587   14705 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-320000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-320000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:41.488232   14705 out.go:201] 
	W1010 11:48:41.494265   14705 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:41.494291   14705 out.go:270] * 
	* 
	W1010 11:48:41.497437   14705 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:48:41.506003   14705 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-320000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000: exit status 7 (70.34675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-509000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000: exit status 7 (35.804041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-509000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-509000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-509000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-509000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.917625ms)

** stderr ** 
	error: context "embed-certs-509000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-509000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000: exit status 7 (32.821792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-509000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-509000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000: exit status 7 (33.59725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-509000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-509000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-509000 --alsologtostderr -v=1: exit status 83 (45.049542ms)

-- stdout --
	* The control-plane node embed-certs-509000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-509000"

-- /stdout --
** stderr ** 
	I1010 11:48:34.463819   14727 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:34.464015   14727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:34.464018   14727 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:34.464021   14727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:34.464153   14727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:34.464374   14727 out.go:352] Setting JSON to false
	I1010 11:48:34.464382   14727 mustload.go:65] Loading cluster: embed-certs-509000
	I1010 11:48:34.464605   14727 config.go:182] Loaded profile config "embed-certs-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:34.468541   14727 out.go:177] * The control-plane node embed-certs-509000 host is not running: state=Stopped
	I1010 11:48:34.471558   14727 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-509000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-509000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000: exit status 7 (33.270333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-509000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000: exit status 7 (33.554125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-509000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-688000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-688000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.946238625s)

-- stdout --
	* [newest-cni-688000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-688000" primary control-plane node in "newest-cni-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:48:34.798090   14744 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:34.798275   14744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:34.798278   14744 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:34.798281   14744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:34.798398   14744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:34.799600   14744 out.go:352] Setting JSON to false
	I1010 11:48:34.817539   14744 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8285,"bootTime":1728577829,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:48:34.817620   14744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:48:34.822561   14744 out.go:177] * [newest-cni-688000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:48:34.829472   14744 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:48:34.829581   14744 notify.go:220] Checking for updates...
	I1010 11:48:34.836503   14744 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:48:34.839454   14744 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:48:34.842500   14744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:48:34.845533   14744 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:48:34.848480   14744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:48:34.851897   14744 config.go:182] Loaded profile config "default-k8s-diff-port-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:34.851956   14744 config.go:182] Loaded profile config "multinode-849000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:34.852005   14744 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:48:34.856479   14744 out.go:177] * Using the qemu2 driver based on user configuration
	I1010 11:48:34.863513   14744 start.go:297] selected driver: qemu2
	I1010 11:48:34.863521   14744 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:48:34.863528   14744 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:48:34.866138   14744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1010 11:48:34.866179   14744 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1010 11:48:34.874467   14744 out.go:177] * Automatically selected the socket_vmnet network
	I1010 11:48:34.877611   14744 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1010 11:48:34.877632   14744 cni.go:84] Creating CNI manager for ""
	I1010 11:48:34.877666   14744 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:48:34.877670   14744 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:48:34.877709   14744 start.go:340] cluster config:
	{Name:newest-cni-688000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:34.882456   14744 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:34.889486   14744 out.go:177] * Starting "newest-cni-688000" primary control-plane node in "newest-cni-688000" cluster
	I1010 11:48:34.893506   14744 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:48:34.893524   14744 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:48:34.893532   14744 cache.go:56] Caching tarball of preloaded images
	I1010 11:48:34.893615   14744 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:48:34.893621   14744 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:48:34.893694   14744 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/newest-cni-688000/config.json ...
	I1010 11:48:34.893706   14744 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/newest-cni-688000/config.json: {Name:mkd02de570848fc02ac8a3c7daa6fb4f1bf66f74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:48:34.894078   14744 start.go:360] acquireMachinesLock for newest-cni-688000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:34.894126   14744 start.go:364] duration metric: took 41.5µs to acquireMachinesLock for "newest-cni-688000"
	I1010 11:48:34.894138   14744 start.go:93] Provisioning new machine with config: &{Name:newest-cni-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:48:34.894176   14744 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:48:34.898473   14744 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:48:34.915219   14744 start.go:159] libmachine.API.Create for "newest-cni-688000" (driver="qemu2")
	I1010 11:48:34.915245   14744 client.go:168] LocalClient.Create starting
	I1010 11:48:34.915318   14744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:48:34.915355   14744 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:34.915366   14744 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:34.915401   14744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:48:34.915432   14744 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:34.915440   14744 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:34.915783   14744 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:48:35.073513   14744 main.go:141] libmachine: Creating SSH key...
	I1010 11:48:35.136991   14744 main.go:141] libmachine: Creating Disk image...
	I1010 11:48:35.136997   14744 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:48:35.137181   14744 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2
	I1010 11:48:35.147140   14744 main.go:141] libmachine: STDOUT: 
	I1010 11:48:35.147159   14744 main.go:141] libmachine: STDERR: 
	I1010 11:48:35.147217   14744 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2 +20000M
	I1010 11:48:35.155653   14744 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:48:35.155673   14744 main.go:141] libmachine: STDERR: 
	I1010 11:48:35.155689   14744 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2
	I1010 11:48:35.155695   14744 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:48:35.155707   14744 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:35.155745   14744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:7b:b9:55:ad:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2
	I1010 11:48:35.157546   14744 main.go:141] libmachine: STDOUT: 
	I1010 11:48:35.157562   14744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:35.157580   14744 client.go:171] duration metric: took 242.332875ms to LocalClient.Create
	I1010 11:48:37.159730   14744 start.go:128] duration metric: took 2.265560334s to createHost
	I1010 11:48:37.159827   14744 start.go:83] releasing machines lock for "newest-cni-688000", held for 2.265719958s
	W1010 11:48:37.159871   14744 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:37.170749   14744 out.go:177] * Deleting "newest-cni-688000" in qemu2 ...
	W1010 11:48:37.197048   14744 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:37.197079   14744 start.go:729] Will try again in 5 seconds ...
	I1010 11:48:42.197605   14744 start.go:360] acquireMachinesLock for newest-cni-688000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:42.197846   14744 start.go:364] duration metric: took 182.667µs to acquireMachinesLock for "newest-cni-688000"
	I1010 11:48:42.197951   14744 start.go:93] Provisioning new machine with config: &{Name:newest-cni-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1010 11:48:42.198153   14744 start.go:125] createHost starting for "" (driver="qemu2")
	I1010 11:48:42.206629   14744 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 11:48:42.253715   14744 start.go:159] libmachine.API.Create for "newest-cni-688000" (driver="qemu2")
	I1010 11:48:42.253765   14744 client.go:168] LocalClient.Create starting
	I1010 11:48:42.253853   14744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/ca.pem
	I1010 11:48:42.253907   14744 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:42.253924   14744 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:42.253985   14744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19787-10623/.minikube/certs/cert.pem
	I1010 11:48:42.254017   14744 main.go:141] libmachine: Decoding PEM data...
	I1010 11:48:42.254033   14744 main.go:141] libmachine: Parsing certificate...
	I1010 11:48:42.254584   14744 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1010 11:48:42.424398   14744 main.go:141] libmachine: Creating SSH key...
	I1010 11:48:42.634694   14744 main.go:141] libmachine: Creating Disk image...
	I1010 11:48:42.634704   14744 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1010 11:48:42.634986   14744 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2
	I1010 11:48:42.645428   14744 main.go:141] libmachine: STDOUT: 
	I1010 11:48:42.645445   14744 main.go:141] libmachine: STDERR: 
	I1010 11:48:42.645500   14744 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2 +20000M
	I1010 11:48:42.654152   14744 main.go:141] libmachine: STDOUT: Image resized.
	
	I1010 11:48:42.654168   14744 main.go:141] libmachine: STDERR: 
	I1010 11:48:42.654178   14744 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2
	I1010 11:48:42.654184   14744 main.go:141] libmachine: Starting QEMU VM...
	I1010 11:48:42.654199   14744 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:42.654236   14744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:43:35:91:68:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2
	I1010 11:48:42.656067   14744 main.go:141] libmachine: STDOUT: 
	I1010 11:48:42.656080   14744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:42.656092   14744 client.go:171] duration metric: took 402.327625ms to LocalClient.Create
	I1010 11:48:44.658265   14744 start.go:128] duration metric: took 2.460112541s to createHost
	I1010 11:48:44.658354   14744 start.go:83] releasing machines lock for "newest-cni-688000", held for 2.460516208s
	W1010 11:48:44.658733   14744 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:44.674424   14744 out.go:201] 
	W1010 11:48:44.678469   14744 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:44.678526   14744 out.go:270] * 
	* 
	W1010 11:48:44.681777   14744 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:48:44.697443   14744 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-688000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-688000 -n newest-cni-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-688000 -n newest-cni-688000: exit status 7 (68.054375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.02s)
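Note: every start failure in this run bottoms out at the same error — the qemu2 driver shells out to socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch, assuming socket_vmnet was installed via Homebrew as the SocketVMnetClientPath/SocketVMnetPath values in the cluster config above suggest (the service name and restart command are assumptions from the minikube qemu2 driver docs, not taken from this log):

    # is the daemon listening? (paths taken from the config dump above)
    ls -l /var/run/socket_vmnet          # socket should exist when the daemon is up
    pgrep -fl socket_vmnet               # daemon process should be running
    # restart it; vmnet requires elevated privileges, hence sudo
    sudo brew services restart socket_vmnet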

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-320000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-320000 create -f testdata/busybox.yaml: exit status 1 (29.037209ms)

** stderr ** 
	error: context "default-k8s-diff-port-320000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-320000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000: exit status 7 (33.454792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-320000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000: exit status 7 (33.361791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
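Note: this is a cascading failure, not an independent one — the default-k8s-diff-port-320000 cluster never provisioned, so its context was never written to the kubeconfig, and every kubectl call against it fails with "context ... does not exist". One way to confirm against the kubeconfig used by this run (standard kubectl subcommand; the KUBECONFIG path is taken from the log above):

    KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig \
      kubectl config get-contexts   # default-k8s-diff-port-320000 is expected to be absent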

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-320000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-320000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-320000 describe deploy/metrics-server -n kube-system: exit status 1 (27.512375ms)

** stderr ** 
	error: context "default-k8s-diff-port-320000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-320000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000: exit status 7 (33.347916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
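Note: the assertion at start_stop_delete_test.go:221 expects the metrics-server deployment to carry the overridden registry image "fake.domain/registry.k8s.io/echoserver:1.4" (set via --images/--registries above). On a cluster that had actually started, that could be checked directly; a sketch (profile name from this log, jsonpath query is standard kubectl):

    kubectl --context default-k8s-diff-port-320000 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to contain: fake.domain/registry.k8s.io/echoserver:1.4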

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-320000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-320000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.189708542s)

-- stdout --
	* [default-k8s-diff-port-320000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-320000" primary control-plane node in "default-k8s-diff-port-320000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-320000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-320000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:48:45.778046   14811 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:45.778192   14811 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:45.778195   14811 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:45.778198   14811 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:45.778329   14811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:45.779409   14811 out.go:352] Setting JSON to false
	I1010 11:48:45.797006   14811 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8296,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:48:45.797081   14811 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:48:45.801371   14811 out.go:177] * [default-k8s-diff-port-320000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:48:45.808305   14811 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:48:45.808350   14811 notify.go:220] Checking for updates...
	I1010 11:48:45.816336   14811 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:48:45.819220   14811 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:48:45.822271   14811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:48:45.825289   14811 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:48:45.828233   14811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:48:45.831581   14811 config.go:182] Loaded profile config "default-k8s-diff-port-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:45.831840   14811 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:48:45.836299   14811 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:48:45.843224   14811 start.go:297] selected driver: qemu2
	I1010 11:48:45.843230   14811 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-320000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-320000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:45.843280   14811 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:48:45.845810   14811 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 11:48:45.845837   14811 cni.go:84] Creating CNI manager for ""
	I1010 11:48:45.845858   14811 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:48:45.845890   14811 start.go:340] cluster config:
	{Name:default-k8s-diff-port-320000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-320000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:45.850313   14811 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:45.858268   14811 out.go:177] * Starting "default-k8s-diff-port-320000" primary control-plane node in "default-k8s-diff-port-320000" cluster
	I1010 11:48:45.861303   14811 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:48:45.861320   14811 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:48:45.861334   14811 cache.go:56] Caching tarball of preloaded images
	I1010 11:48:45.861414   14811 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:48:45.861420   14811 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:48:45.861483   14811 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/default-k8s-diff-port-320000/config.json ...
	I1010 11:48:45.861945   14811 start.go:360] acquireMachinesLock for default-k8s-diff-port-320000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:45.861975   14811 start.go:364] duration metric: took 24.416µs to acquireMachinesLock for "default-k8s-diff-port-320000"
	I1010 11:48:45.861985   14811 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:48:45.861990   14811 fix.go:54] fixHost starting: 
	I1010 11:48:45.862116   14811 fix.go:112] recreateIfNeeded on default-k8s-diff-port-320000: state=Stopped err=<nil>
	W1010 11:48:45.862124   14811 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:48:45.866244   14811 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-320000" ...
	I1010 11:48:45.874214   14811 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:45.874248   14811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:61:c0:f6:59:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2
	I1010 11:48:45.876601   14811 main.go:141] libmachine: STDOUT: 
	I1010 11:48:45.876621   14811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:45.876652   14811 fix.go:56] duration metric: took 14.661083ms for fixHost
	I1010 11:48:45.876658   14811 start.go:83] releasing machines lock for "default-k8s-diff-port-320000", held for 14.67875ms
	W1010 11:48:45.876665   14811 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:45.876708   14811 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:45.876713   14811 start.go:729] Will try again in 5 seconds ...
	I1010 11:48:50.878862   14811 start.go:360] acquireMachinesLock for default-k8s-diff-port-320000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:50.879261   14811 start.go:364] duration metric: took 309.875µs to acquireMachinesLock for "default-k8s-diff-port-320000"
	I1010 11:48:50.879393   14811 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:48:50.879417   14811 fix.go:54] fixHost starting: 
	I1010 11:48:50.880263   14811 fix.go:112] recreateIfNeeded on default-k8s-diff-port-320000: state=Stopped err=<nil>
	W1010 11:48:50.880291   14811 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:48:50.886700   14811 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-320000" ...
	I1010 11:48:50.889692   14811 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:50.889925   14811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:61:c0:f6:59:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/default-k8s-diff-port-320000/disk.qcow2
	I1010 11:48:50.899880   14811 main.go:141] libmachine: STDOUT: 
	I1010 11:48:50.899951   14811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:50.900043   14811 fix.go:56] duration metric: took 20.626167ms for fixHost
	I1010 11:48:50.900067   14811 start.go:83] releasing machines lock for "default-k8s-diff-port-320000", held for 20.78525ms
	W1010 11:48:50.900294   14811 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-320000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-320000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:50.907718   14811 out.go:201] 
	W1010 11:48:50.911763   14811 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:50.911792   14811 out.go:270] * 
	* 
	W1010 11:48:50.914354   14811 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:48:50.921637   14811 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-320000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000: exit status 7 (70.980459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
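Note: the SecondStart path differs from FirstStart — fixHost reuses the existing machine config and goes straight to "Restarting existing qemu2 VM", so the same socket_vmnet failure surfaces as "driver start" rather than "creating host". The failure is reproducible outside the test harness with the exact start command captured above; it exits 80 (the GUEST_PROVISION error class reported in the log) while the daemon is unreachable, and the follow-up status check exits 7 with "Stopped":

    out/minikube-darwin-arm64 start -p default-k8s-diff-port-320000 --memory=2200 \
      --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2 \
      --kubernetes-version=v1.31.1
    # exit status 80 while /var/run/socket_vmnet refuses connections
    out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000
    # prints "Stopped", exit status 7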

TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-688000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-688000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.190211583s)

-- stdout --
	* [newest-cni-688000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-688000" primary control-plane node in "newest-cni-688000" cluster
	* Restarting existing qemu2 VM for "newest-cni-688000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-688000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1010 11:48:47.128772   14826 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:47.128931   14826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:47.128934   14826 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:47.128937   14826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:47.129047   14826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:47.130196   14826 out.go:352] Setting JSON to false
	I1010 11:48:47.147971   14826 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8298,"bootTime":1728577829,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:48:47.148050   14826 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:48:47.153129   14826 out.go:177] * [newest-cni-688000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:48:47.160062   14826 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:48:47.160124   14826 notify.go:220] Checking for updates...
	I1010 11:48:47.167989   14826 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:48:47.171079   14826 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:48:47.172346   14826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:48:47.175041   14826 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:48:47.178065   14826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:48:47.181331   14826 config.go:182] Loaded profile config "newest-cni-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:47.181606   14826 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:48:47.185989   14826 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:48:47.193052   14826 start.go:297] selected driver: qemu2
	I1010 11:48:47.193059   14826 start.go:901] validating driver "qemu2" against &{Name:newest-cni-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:newest-cni-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:47.193120   14826 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:48:47.195640   14826 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1010 11:48:47.195666   14826 cni.go:84] Creating CNI manager for ""
	I1010 11:48:47.195687   14826 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:48:47.195707   14826 start.go:340] cluster config:
	{Name:newest-cni-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-688000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:48:47.200217   14826 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:48:47.208033   14826 out.go:177] * Starting "newest-cni-688000" primary control-plane node in "newest-cni-688000" cluster
	I1010 11:48:47.212051   14826 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:48:47.212068   14826 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:48:47.212077   14826 cache.go:56] Caching tarball of preloaded images
	I1010 11:48:47.212143   14826 preload.go:172] Found /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1010 11:48:47.212155   14826 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1010 11:48:47.212224   14826 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/newest-cni-688000/config.json ...
	I1010 11:48:47.212687   14826 start.go:360] acquireMachinesLock for newest-cni-688000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:47.212718   14826 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "newest-cni-688000"
	I1010 11:48:47.212728   14826 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:48:47.212733   14826 fix.go:54] fixHost starting: 
	I1010 11:48:47.212844   14826 fix.go:112] recreateIfNeeded on newest-cni-688000: state=Stopped err=<nil>
	W1010 11:48:47.212851   14826 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:48:47.217217   14826 out.go:177] * Restarting existing qemu2 VM for "newest-cni-688000" ...
	I1010 11:48:47.225102   14826 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:47.225146   14826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:43:35:91:68:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2
	I1010 11:48:47.227305   14826 main.go:141] libmachine: STDOUT: 
	I1010 11:48:47.227323   14826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:47.227355   14826 fix.go:56] duration metric: took 14.620292ms for fixHost
	I1010 11:48:47.227360   14826 start.go:83] releasing machines lock for "newest-cni-688000", held for 14.637792ms
	W1010 11:48:47.227367   14826 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:47.227415   14826 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:47.227420   14826 start.go:729] Will try again in 5 seconds ...
	I1010 11:48:52.229629   14826 start.go:360] acquireMachinesLock for newest-cni-688000: {Name:mkad5ee41eb4e30abbbe655c557b02ef95108a1e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 11:48:52.230277   14826 start.go:364] duration metric: took 541.959µs to acquireMachinesLock for "newest-cni-688000"
	I1010 11:48:52.230410   14826 start.go:96] Skipping create...Using existing machine configuration
	I1010 11:48:52.230431   14826 fix.go:54] fixHost starting: 
	I1010 11:48:52.231268   14826 fix.go:112] recreateIfNeeded on newest-cni-688000: state=Stopped err=<nil>
	W1010 11:48:52.231295   14826 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 11:48:52.238685   14826 out.go:177] * Restarting existing qemu2 VM for "newest-cni-688000" ...
	I1010 11:48:52.242709   14826 qemu.go:418] Using hvf for hardware acceleration
	I1010 11:48:52.242939   14826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:43:35:91:68:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19787-10623/.minikube/machines/newest-cni-688000/disk.qcow2
	I1010 11:48:52.253494   14826 main.go:141] libmachine: STDOUT: 
	I1010 11:48:52.253554   14826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1010 11:48:52.253643   14826 fix.go:56] duration metric: took 23.211875ms for fixHost
	I1010 11:48:52.253662   14826 start.go:83] releasing machines lock for "newest-cni-688000", held for 23.355209ms
	W1010 11:48:52.253820   14826 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1010 11:48:52.261643   14826 out.go:201] 
	W1010 11:48:52.264766   14826 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1010 11:48:52.264804   14826 out.go:270] * 
	* 
	W1010 11:48:52.267593   14826 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:48:52.275751   14826 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-688000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-688000 -n newest-cni-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-688000 -n newest-cni-688000: exit status 7 (73.031667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)
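Every qemu2 start in this run fails the same way: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never gets its network file descriptor and minikube exits with GUEST_PROVISION. A minimal Go probe, sketched here and not part of the test suite (the socket path is taken from the failing command line above), can confirm whether the daemon is listening on the agent before the suite runs:

    // socketprobe.go: dial the unix socket that socket_vmnet_client uses.
    // Sketch only; assumes the daemon should be listening at the path below.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path from the failing command line
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // Reproduces the "Connection refused" symptom seen in this report.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is listening at", sock)
    }

If the probe fails, restarting the socket_vmnet service on the host is the likely fix; the repeated "StartHost failed" and "Failed to start qemu2 VM" entries in this report all show the same symptom.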

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-320000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000: exit status 7 (34.56475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-320000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-320000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-320000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.083834ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-320000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-320000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000: exit status 7 (33.805833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-320000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000: exit status 7 (33.751958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
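The "(-want +got)" block above is a go-cmp style diff: each "-" line is an expected image that image list did not return, and here nothing is returned at all because the host is stopped. A minimal sketch of how such a diff is produced, assuming the github.com/google/go-cmp module whose output format this matches:

    // diffsketch.go: reproduce a "(-want +got)" image diff with go-cmp.
    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/kube-apiserver:v1.31.1",
        }
        var got []string // empty: a stopped VM returns no images
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
        }
    }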

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-320000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-320000 --alsologtostderr -v=1: exit status 83 (42.780042ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-320000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-320000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:48:51.213359   14845 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:51.213547   14845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:51.213551   14845 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:51.213553   14845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:51.213673   14845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:51.213892   14845 out.go:352] Setting JSON to false
	I1010 11:48:51.213901   14845 mustload.go:65] Loading cluster: default-k8s-diff-port-320000
	I1010 11:48:51.214116   14845 config.go:182] Loaded profile config "default-k8s-diff-port-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:51.216929   14845 out.go:177] * The control-plane node default-k8s-diff-port-320000 host is not running: state=Stopped
	I1010 11:48:51.219927   14845 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-320000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-320000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000: exit status 7 (33.389958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-320000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000: exit status 7 (32.9875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-320000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
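Both failures in this block are communicated purely through exit codes: pause returns exit status 83 when the control-plane host is not running, and the post-mortem status check returns exit status 7 for a stopped host, which the helpers explicitly treat as "may be ok". A standalone sketch of that status check (binary path and profile name taken from the logs above):

    // statuscheck.go: run `minikube status` and branch on the exit code,
    // mirroring the helpers_test.go post-mortem above. Sketch only.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "default-k8s-diff-port-320000"
        cmd := exec.Command("out/minikube-darwin-arm64",
            "status", "--format={{.Host}}", "-p", profile, "-n", profile)
        out, err := cmd.Output()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Exit status 7 = host stopped; not necessarily a test bug.
            fmt.Printf("status exited %d, host: %s\n", exitErr.ExitCode(), out)
            return
        }
        if err != nil {
            panic(err)
        }
        fmt.Printf("host: %s\n", out)
    }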

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-688000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-688000 -n newest-cni-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-688000 -n newest-cni-688000: exit status 7 (35.133583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-688000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-688000 --alsologtostderr -v=1: exit status 83 (47.3685ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-688000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-688000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 11:48:52.479453   14869 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:48:52.479661   14869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:52.479664   14869 out.go:358] Setting ErrFile to fd 2...
	I1010 11:48:52.479667   14869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:48:52.479796   14869 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:48:52.480040   14869 out.go:352] Setting JSON to false
	I1010 11:48:52.480048   14869 mustload.go:65] Loading cluster: newest-cni-688000
	I1010 11:48:52.480283   14869 config.go:182] Loaded profile config "newest-cni-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:48:52.484877   14869 out.go:177] * The control-plane node newest-cni-688000 host is not running: state=Stopped
	I1010 11:48:52.488779   14869 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-688000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-688000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-688000 -n newest-cni-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-688000 -n newest-cni-688000: exit status 7 (34.5355ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-688000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-688000 -n newest-cni-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-688000 -n newest-cni-688000: exit status 7 (34.204084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.12s)

                                                
                                    

Test pass (79/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 8.12
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.12
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.5
39 TestErrorSpam/start 0.45
40 TestErrorSpam/status 0.1
41 TestErrorSpam/pause 0.14
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 7.19
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.82
55 TestFunctional/serial/CacheCmd/cache/add_local 1.03
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.25
71 TestFunctional/parallel/DryRun 0.31
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.11
93 TestFunctional/parallel/License 0.23
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.79
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
126 TestFunctional/parallel/ProfileCmd/profile_list 0.09
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.05
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.51
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.22
193 TestMainNoArgs 0.04
240 TestStoppedBinaryUpgrade/Setup 1.24
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
257 TestNoKubernetes/serial/ProfileList 31.3
258 TestNoKubernetes/serial/Stop 3.66
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
275 TestStartStop/group/old-k8s-version/serial/Stop 3.64
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
288 TestStartStop/group/no-preload/serial/Stop 3.3
291 TestStartStop/group/embed-certs/serial/Stop 3
292 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.8
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
313 TestStartStop/group/newest-cni/serial/Stop 2.12
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.14
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1010 11:22:39.665789   11135 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1010 11:22:39.666144   11135 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
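The preload-exists checks assert only that the cached tarball is present on disk. An equivalent standalone check, sketched under the assumption of the default cache layout beneath the user's home directory (this run overrides minikube's home to the Jenkins workspace):

    // preloadcheck.go: confirm a preload tarball is cached locally. Sketch;
    // assumes the default ~/.minikube layout rather than a MINIKUBE_HOME override.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        home, err := os.UserHomeDir()
        if err != nil {
            panic(err)
        }
        p := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
            "preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4")
        if _, err := os.Stat(p); err != nil {
            fmt.Println("preload missing:", err)
            return
        }
        fmt.Println("preload found:", p)
    }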

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-370000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-370000: exit status 85 (100.054875ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-370000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |          |
	|         | -p download-only-370000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 11:22:21
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 11:22:21.283635   11136 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:22:21.283818   11136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:22:21.283821   11136 out.go:358] Setting ErrFile to fd 2...
	I1010 11:22:21.283824   11136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:22:21.283941   11136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	W1010 11:22:21.284051   11136 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19787-10623/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19787-10623/.minikube/config/config.json: no such file or directory
	I1010 11:22:21.285470   11136 out.go:352] Setting JSON to true
	I1010 11:22:21.303010   11136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6712,"bootTime":1728577829,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:22:21.303086   11136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:22:21.307701   11136 out.go:97] [download-only-370000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:22:21.307857   11136 notify.go:220] Checking for updates...
	W1010 11:22:21.307909   11136 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball: no such file or directory
	I1010 11:22:21.311671   11136 out.go:169] MINIKUBE_LOCATION=19787
	I1010 11:22:21.314641   11136 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:22:21.317643   11136 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:22:21.320640   11136 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:22:21.323622   11136 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	W1010 11:22:21.328656   11136 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1010 11:22:21.328854   11136 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:22:21.331610   11136 out.go:97] Using the qemu2 driver based on user configuration
	I1010 11:22:21.331627   11136 start.go:297] selected driver: qemu2
	I1010 11:22:21.331641   11136 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:22:21.331687   11136 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:22:21.334647   11136 out.go:169] Automatically selected the socket_vmnet network
	I1010 11:22:21.341021   11136 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1010 11:22:21.341116   11136 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1010 11:22:21.341156   11136 cni.go:84] Creating CNI manager for ""
	I1010 11:22:21.341198   11136 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1010 11:22:21.341253   11136 start.go:340] cluster config:
	{Name:download-only-370000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-370000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:22:21.345938   11136 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:22:21.349636   11136 out.go:97] Downloading VM boot image ...
	I1010 11:22:21.349650   11136 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso
	I1010 11:22:30.291956   11136 out.go:97] Starting "download-only-370000" primary control-plane node in "download-only-370000" cluster
	I1010 11:22:30.291987   11136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1010 11:22:30.363056   11136 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1010 11:22:30.363084   11136 cache.go:56] Caching tarball of preloaded images
	I1010 11:22:30.363329   11136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1010 11:22:30.368422   11136 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1010 11:22:30.368430   11136 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1010 11:22:30.465136   11136 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1010 11:22:38.354599   11136 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1010 11:22:38.354766   11136 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1010 11:22:39.048088   11136 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1010 11:22:39.048294   11136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/download-only-370000/config.json ...
	I1010 11:22:39.048310   11136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19787-10623/.minikube/profiles/download-only-370000/config.json: {Name:mk8adaed966bd55990f86cf0fe6964be518521c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 11:22:39.048556   11136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1010 11:22:39.048792   11136 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1010 11:22:39.620260   11136 out.go:193] 
	W1010 11:22:39.623373   11136 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19787-10623/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10549cfe0 0x10549cfe0 0x10549cfe0 0x10549cfe0 0x10549cfe0 0x10549cfe0 0x10549cfe0] Decompressors:map[bz2:0x140007925b0 gz:0x140007925b8 tar:0x140007924e0 tar.bz2:0x140007924f0 tar.gz:0x14000792540 tar.xz:0x14000792550 tar.zst:0x14000792590 tbz2:0x140007924f0 tgz:0x14000792540 txz:0x14000792550 tzst:0x14000792590 xz:0x140007925c0 zip:0x140007925d0 zst:0x140007925c8] Getters:map[file:0x140014d4550 http:0x14000c865f0 https:0x14000c86640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1010 11:22:39.623408   11136 out_reason.go:110] 
	W1010 11:22:39.630277   11136 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 11:22:39.634296   11136 out.go:193] 
	
	
	* The control-plane node download-only-370000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-370000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
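The real failure inside this passing log is the 404 from dl.k8s.io: no darwin/arm64 kubectl checksum file is published for v1.20.0, so the download-only run cannot cache kubectl on an arm64 Mac. A quick probe, sketched with the checksum URL quoted in the log, reproduces the response:

    // kubectlprobe.go: HEAD-request the checksum URL from the log above to
    // see whether a kubectl build exists for a version/OS/arch combination.
    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
        resp, err := http.Head(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(url, "->", resp.Status) // a 404 here matches the failure above
    }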

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-370000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (8.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-988000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-988000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (8.115682209s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1010 11:22:48.163900   11135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1010 11:22:48.163952   11135 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-988000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-988000: exit status 85 (81.085333ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-370000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
	|         | -p download-only-370000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
	| delete  | -p download-only-370000        | download-only-370000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT | 10 Oct 24 11:22 PDT |
	| start   | -o=json --download-only        | download-only-988000 | jenkins | v1.34.0 | 10 Oct 24 11:22 PDT |                     |
	|         | -p download-only-988000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 11:22:40
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 11:22:40.079236   11163 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:22:40.079400   11163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:22:40.079403   11163 out.go:358] Setting ErrFile to fd 2...
	I1010 11:22:40.079406   11163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:22:40.079510   11163 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:22:40.080695   11163 out.go:352] Setting JSON to true
	I1010 11:22:40.098038   11163 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6731,"bootTime":1728577829,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:22:40.098117   11163 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:22:40.103211   11163 out.go:97] [download-only-988000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:22:40.103333   11163 notify.go:220] Checking for updates...
	I1010 11:22:40.107077   11163 out.go:169] MINIKUBE_LOCATION=19787
	I1010 11:22:40.114102   11163 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:22:40.118101   11163 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:22:40.122134   11163 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:22:40.125196   11163 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	W1010 11:22:40.132121   11163 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1010 11:22:40.132279   11163 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:22:40.136094   11163 out.go:97] Using the qemu2 driver based on user configuration
	I1010 11:22:40.136106   11163 start.go:297] selected driver: qemu2
	I1010 11:22:40.136111   11163 start.go:901] validating driver "qemu2" against <nil>
	I1010 11:22:40.136167   11163 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 11:22:40.140137   11163 out.go:169] Automatically selected the socket_vmnet network
	I1010 11:22:40.146486   11163 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1010 11:22:40.146578   11163 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1010 11:22:40.146598   11163 cni.go:84] Creating CNI manager for ""
	I1010 11:22:40.146629   11163 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1010 11:22:40.146638   11163 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 11:22:40.146675   11163 start.go:340] cluster config:
	{Name:download-only-988000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-988000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:22:40.151294   11163 iso.go:125] acquiring lock: {Name:mka5fed75c5943f5e917ac5bb6d1a9c386ae795f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 11:22:40.155115   11163 out.go:97] Starting "download-only-988000" primary control-plane node in "download-only-988000" cluster
	I1010 11:22:40.155122   11163 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:22:40.217856   11163 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1010 11:22:40.217868   11163 cache.go:56] Caching tarball of preloaded images
	I1010 11:22:40.218081   11163 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1010 11:22:40.222291   11163 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1010 11:22:40.222299   11163 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1010 11:22:40.302418   11163 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19787-10623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-988000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-988000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)
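The preload download above is verified against the md5 carried in the URL's checksum query parameter (402f69b5e09ccb1e1dbe401b4cdd104d). The same verification can be repeated offline against the cached tarball; a sketch, assuming it is run from the cache directory named in the log:

    // preloadmd5.go: recompute the md5 of the cached preload tarball and
    // compare it to the checksum from the download URL. Sketch only.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    func main() {
        const want = "402f69b5e09ccb1e1dbe401b4cdd104d"
        f, err := os.Open("preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            panic(err)
        }
        fmt.Println("checksum match:", hex.EncodeToString(h.Sum(nil)) == want)
    }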

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-988000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-244000
addons_test.go:935: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-244000: exit status 85 (58.615ms)

                                                
                                                
-- stdout --
	* Profile "addons-244000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-244000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-244000
addons_test.go:946: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-244000: exit status 85 (61.36875ms)

                                                
                                                
-- stdout --
	* Profile "addons-244000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-244000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.5s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1010 11:34:07.462411   11135 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1010 11:34:07.462589   11135 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1010 11:34:09.420552   11135 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1010 11:34:09.420788   11135 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1010 11:34:09.420828   11135 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/001/docker-machine-driver-hyperkit
I1010 11:34:09.925234   11135 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x109a22400 0x109a22400 0x109a22400 0x109a22400 0x109a22400 0x109a22400 0x109a22400] Decompressors:map[bz2:0x14000681110 gz:0x14000681118 tar:0x14000680c60 tar.bz2:0x14000680c70 tar.gz:0x14000680d20 tar.xz:0x14000680d40 tar.zst:0x14000680d50 tbz2:0x14000680c70 tgz:0x14000680d20 txz:0x14000680d40 tzst:0x14000680d50 xz:0x14000681130 zip:0x14000681150 zst:0x14000681138] Getters:map[file:0x14000b85cb0 http:0x1400094d7c0 https:0x1400094d810] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1010 11:34:09.925358   11135 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/001/docker-machine-driver-hyperkit
I1010 11:34:12.688628   11135 install.go:79] stdout: 
W1010 11:34:12.688837   11135 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/001/docker-machine-driver-hyperkit 

I1010 11:34:12.688873   11135 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/001/docker-machine-driver-hyperkit]
I1010 11:34:12.705596   11135 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/001/docker-machine-driver-hyperkit]
I1010 11:34:12.718855   11135 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/001/docker-machine-driver-hyperkit]
I1010 11:34:12.729831   11135 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate851081455/001/docker-machine-driver-hyperkit]
I1010 11:34:12.751111   11135 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1010 11:34:12.751219   11135 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
--- PASS: TestHyperKitDriverInstallOrUpdate (10.50s)
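
The install log above shows the driver updater's fallback: the arch-specific docker-machine-driver-hyperkit-arm64 asset 404s on its checksum file, so the code retries the unsuffixed common name, which succeeds. A minimal sketch of that try-the-specific-then-fall-back pattern, with a stand-in fetch helper (minikube's real download.go goes through go-getter, as the dump shows):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch is a stand-in for the downloader: it fails loudly on a 404 so the
// caller can fall back, just as the checksum fetch fails in the log.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	dst := "docker-machine-driver-hyperkit"
	// Arch-specific asset first, then the common name, as in the log above.
	if err := fetch(base+"-arm64", dst); err != nil {
		fmt.Println("arch-specific download failed, trying the common version:", err)
		if err := fetch(base, dst); err != nil {
			fmt.Println("download failed:", err)
		}
	}
}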

TestErrorSpam/start (0.45s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 start --dry-run
--- PASS: TestErrorSpam/start (0.45s)

TestErrorSpam/status (0.1s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 status: exit status 7 (35.296333ms)

-- stdout --
	nospam-462000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 status: exit status 7 (32.905208ms)

-- stdout --
	nospam-462000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 status: exit status 7 (33.422917ms)

-- stdout --
	nospam-462000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)
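
The status checks above tolerate the non-zero exit: "minikube status" encodes component health in its exit code (its help text describes bits for the host, cluster and Kubernetes, so 7 is consistent with everything stopped) while still printing the table. A sketch of reading both the code and the table, assuming the same profile:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "nospam-462000", "status")
	out, err := cmd.Output() // stdout is still populated on a non-zero exit
	if ee, ok := err.(*exec.ExitError); ok {
		// Per "minikube status --help", the exit code sets one bit per
		// unhealthy component; 7 lines up with a fully stopped profile.
		fmt.Printf("status exited %d\n", ee.ExitCode())
	}
	fmt.Printf("%s", out)
}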

TestErrorSpam/pause (0.14s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 pause: exit status 83 (46.521833ms)

-- stdout --
	* The control-plane node nospam-462000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-462000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 pause: exit status 83 (48.979709ms)

-- stdout --
	* The control-plane node nospam-462000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-462000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 pause: exit status 83 (44.706042ms)

-- stdout --
	* The control-plane node nospam-462000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-462000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.14s)

TestErrorSpam/unpause (0.13s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 unpause: exit status 83 (43.772667ms)

-- stdout --
	* The control-plane node nospam-462000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-462000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 unpause: exit status 83 (41.830333ms)

-- stdout --
	* The control-plane node nospam-462000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-462000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 unpause: exit status 83 (40.853083ms)

-- stdout --
	* The control-plane node nospam-462000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-462000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (7.19s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 stop: (3.353192958s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 stop: (1.785571625s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-462000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-462000 stop: (2.048330709s)
--- PASS: TestErrorSpam/stop (7.19s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19787-10623/.minikube/files/etc/test/nested/copy/11135/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.82s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.82s)

TestFunctional/serial/CacheCmd/cache/add_local (1.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-444000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2782842975/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 cache add minikube-local-cache-test:functional-444000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 cache delete minikube-local-cache-test:functional-444000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-444000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
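
Taken together, the CacheCmd tests above perform a round trip: add remote and local images to the cache, list, then delete. The same sequence can be scripted against the test binary (tags and paths taken from the log; purely illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the same test binary the log uses and echoes the result.
func run(args ...string) {
	out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s(err=%v)\n", args, out, err)
}

func main() {
	run("-p", "functional-444000", "cache", "add", "registry.k8s.io/pause:3.1")
	run("cache", "list")
	run("cache", "delete", "registry.k8s.io/pause:3.1")
}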

TestFunctional/parallel/ConfigCmd (0.25s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 config get cpus: exit status 14 (34.499209ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 config get cpus: exit status 14 (40.554083ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
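
ConfigCmd cycles unset/get/set: "config get" on a key that is not set exits 14 with "Error: specified key could not be found in config". A sketch asserting that behavior, mirroring the commands above:

package integration

import (
	"errors"
	"os/exec"
	"testing"
)

func TestConfigGetUnsetKey(t *testing.T) {
	// "config unset cpus" followed by "config get cpus" should fail with
	// exit status 14, matching the run above.
	_ = exec.Command("out/minikube-darwin-arm64", "-p", "functional-444000", "config", "unset", "cpus").Run()
	err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-444000", "config", "get", "cpus").Run()
	var ee *exec.ExitError
	if !errors.As(err, &ee) || ee.ExitCode() != 14 {
		t.Fatalf("expected exit status 14 for an unset key, got %v", err)
	}
}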

TestFunctional/parallel/DryRun (0.31s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-444000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-444000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (166.330625ms)

-- stdout --
	* [functional-444000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1010 11:24:26.950301   11716 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:24:26.950506   11716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:24:26.950515   11716 out.go:358] Setting ErrFile to fd 2...
	I1010 11:24:26.950519   11716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:24:26.950693   11716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:24:26.952089   11716 out.go:352] Setting JSON to false
	I1010 11:24:26.973297   11716 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6837,"bootTime":1728577829,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:24:26.973368   11716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:24:26.979005   11716 out.go:177] * [functional-444000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1010 11:24:26.986146   11716 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:24:26.986180   11716 notify.go:220] Checking for updates...
	I1010 11:24:26.993036   11716 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:24:26.996023   11716 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:24:26.999014   11716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:24:27.002059   11716 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:24:27.005035   11716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:24:27.008242   11716 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:24:27.008532   11716 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:24:27.012989   11716 out.go:177] * Using the qemu2 driver based on existing profile
	I1010 11:24:27.019976   11716 start.go:297] selected driver: qemu2
	I1010 11:24:27.019984   11716 start.go:901] validating driver "qemu2" against &{Name:functional-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:24:27.020039   11716 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:24:27.027004   11716 out.go:201] 
	W1010 11:24:27.029937   11716 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1010 11:24:27.033940   11716 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-444000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.31s)
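
The dry-run failure above is the intended behavior: --memory 250MB trips the RSRC_INSUFFICIENT_REQ_MEMORY guard before any VM work starts, because 250MiB is under the stated usable minimum of 1800MB. An illustrative Go version of such a pre-flight check (the constant comes from the error message, not from minikube's source):

package main

import "fmt"

const minUsableMemoryMB = 1800 // taken from the error message in the log

// validateMemory mimics the shape of the guard, not minikube's actual code.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, like the --memory 250MB dry run
	fmt.Println(validateMemory(4000)) // passes, matching the profile's 4000MB
}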

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-444000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-444000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.737709ms)

-- stdout --
	* [functional-444000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1010 11:24:27.210299   11727 out.go:345] Setting OutFile to fd 1 ...
	I1010 11:24:27.210432   11727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:24:27.210435   11727 out.go:358] Setting ErrFile to fd 2...
	I1010 11:24:27.210438   11727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 11:24:27.210552   11727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19787-10623/.minikube/bin
	I1010 11:24:27.211938   11727 out.go:352] Setting JSON to false
	I1010 11:24:27.230009   11727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6838,"bootTime":1728577829,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1010 11:24:27.230087   11727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1010 11:24:27.235034   11727 out.go:177] * [functional-444000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1010 11:24:27.241896   11727 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 11:24:27.241938   11727 notify.go:220] Checking for updates...
	I1010 11:24:27.250056   11727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	I1010 11:24:27.252918   11727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1010 11:24:27.256012   11727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 11:24:27.259011   11727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	I1010 11:24:27.260343   11727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 11:24:27.263291   11727 config.go:182] Loaded profile config "functional-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1010 11:24:27.263578   11727 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 11:24:27.267976   11727 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1010 11:24:27.273026   11727 start.go:297] selected driver: qemu2
	I1010 11:24:27.273032   11727 start.go:901] validating driver "qemu2" against &{Name:functional-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 11:24:27.273092   11727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 11:24:27.280072   11727 out.go:201] 
	W1010 11:24:27.283962   11727 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1010 11:24:27.287991   11727 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
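
InternationalLanguage repeats the failing dry run with the process locale switched, so minikube answers in French: "Utilisation du pilote qemu2 basé sur le profil existant" ("Using the qemu2 driver based on the existing profile") and "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY" ("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY"). A sketch of provoking that by hand; using LC_ALL is an assumption about how the locale gets selected:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-444000",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=qemu2")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumption: locale selection via LC_ALL
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out) // expect "Utilisation du pilote qemu2 ..." as in the log
}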

TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/License (0.23s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.79s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.748317959s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-444000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-444000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image rm kicbase/echo-server:functional-444000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-444000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 image save --daemon kicbase/echo-server:functional-444000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-444000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.1s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)

TestFunctional/parallel/ProfileCmd/profile_list (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "52.993209ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
I1010 11:23:49.750247   11135 retry.go:31] will retry after 3.058796457s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:1329: Took "39.018667ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "51.419542ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "37.713584ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)
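
The -o json variants above exist for scripting. Since this log does not show the payload's schema, a safe consumer decodes "profile list -o json" into a generic map rather than assuming field names:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Decode generically: the schema is not shown in this log, so don't
	// hard-code field names.
	var payload map[string]interface{}
	if err := json.Unmarshal(out, &payload); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for key := range payload {
		fmt.Println("top-level key:", key)
	}
}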

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.014337125s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.05s)
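
The tunnel's DNS check shells out to macOS's dscacheutil instead of Go's resolver, so it exercises the system resolver path end to end. A sketch of the same probe (matching on the standard "ip_address:" line of dscacheutil -q host output):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name",
		"nginx-svc.default.svc.cluster.local.").CombinedOutput()
	if err != nil {
		fmt.Println("dscacheutil failed:", err)
		return
	}
	if strings.Contains(string(out), "ip_address:") {
		fmt.Println("DNS resolution is working")
	} else {
		fmt.Printf("no address in reply:\n%s", out)
	}
}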

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-444000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-444000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-444000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-444000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.51s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-617000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-617000 --output=json --user=testUser: (3.511473333s)
--- PASS: TestJSONOutput/stop/Command (3.51s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-937000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-937000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (106.976666ms)

-- stdout --
	{"specversion":"1.0","id":"7fc4fec8-e155-4834-98e6-ac9f24cf5d5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-937000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1caa2af0-0959-42ae-bde0-fcdd43f8db11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19787"}}
	{"specversion":"1.0","id":"e030efe0-05cf-42a5-b3d7-7b63b2886f04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig"}}
	{"specversion":"1.0","id":"794ad88a-f5b4-4faa-9f4b-4591e07cc793","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4de1b618-4703-40c3-9898-d0d82971202c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a4610a08-02f4-4124-b56f-541086b8bafa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube"}}
	{"specversion":"1.0","id":"410e9213-d852-4d03-b7b9-2ca7f3347e77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d3d5e86c-fb84-4105-a60c-a67dd8315850","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-937000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-937000
--- PASS: TestErrorJSONOutput (0.22s)
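
The --output=json stream above emits one JSON object per line in a CloudEvents-style envelope; the field names below are taken directly from the events shown. A sketch of a line-oriented parser, fed here with one of the logged events so it runs standalone:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// minikubeEvent mirrors the envelope visible in the stdout block above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// One logged event stands in for the command's stdout stream.
	stream := `{"specversion":"1.0","id":"d3d5e86c-fb84-4105-a60c-a67dd8315850","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Println("skipping unparsable line:", err)
			continue
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}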

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.24s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.24s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-202000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-202000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (101.7405ms)
-- stdout --
	* [NoKubernetes-202000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19787
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19787-10623/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19787-10623/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-202000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-202000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.621958ms)
-- stdout --
	* The control-plane node NoKubernetes-202000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-202000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
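Note: many checks in this report hinge on process exit codes (83 above means the profile's host is not running; 7 later in the report means a stopped host). A minimal sketch of how such a probe can be driven and its exit code read back in Go, mirroring the ssh/systemctl check above; the binary path and profile name are copied from this log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe as above: systemctl exits non-zero when kubelet is inactive,
		// and minikube exits 83 when the host itself is not running.
		cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-202000",
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit status:", ee.ExitCode()) // 83 in the run above
		}
	}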

TestNoKubernetes/serial/ProfileList (31.3s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.584379292s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.7102915s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.30s)

TestNoKubernetes/serial/Stop (3.66s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-202000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-202000: (3.661438375s)
--- PASS: TestNoKubernetes/serial/Stop (3.66s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-202000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-202000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (48.737541ms)
-- stdout --
	* The control-plane node NoKubernetes-202000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-202000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-616000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

TestStartStop/group/old-k8s-version/serial/Stop (3.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-829000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-829000 --alsologtostderr -v=3: (3.639684041s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.64s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-829000 -n old-k8s-version-829000: exit status 7 (50.736917ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-829000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
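Note: the harness logs "status error: exit status 7 (may be ok)" because the profile was stopped on purpose in the previous step, so a non-zero status is expected here. A sketch of that allowance, assuming only what this log shows (7 observed for a stopped profile; the constant name is ours):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// statusStopped is the exit code observed above for a stopped profile;
	// treating it as acceptable is this test's convention, not a general rule.
	const statusStopped = 7

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-829000")
		out, err := cmd.Output()
		if err != nil {
			var ee *exec.ExitError
			if !errors.As(err, &ee) || ee.ExitCode() != statusStopped {
				fmt.Println("status failed:", err)
				return
			}
			// exit status 7 just reflects the deliberate stop ("may be ok").
		}
		fmt.Printf("host state: %s\n", out) // "Stopped" in the run above
	}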

TestStartStop/group/no-preload/serial/Stop (3.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-477000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-477000 --alsologtostderr -v=3: (3.304807042s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.30s)

TestStartStop/group/embed-certs/serial/Stop (3s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-509000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-509000 --alsologtostderr -v=3: (2.999349667s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.00s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-477000 -n no-preload-477000: exit status 7 (61.404625ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-477000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-509000 -n embed-certs-509000: exit status 7 (60.037375ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-509000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-320000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-320000 --alsologtostderr -v=3: (3.799530375s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.80s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-688000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-688000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-688000 --alsologtostderr -v=3: (2.121595583s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-320000 -n default-k8s-diff-port-320000: exit status 7 (66.038583ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-320000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-688000 -n newest-cni-688000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-688000 -n newest-cni-688000: exit status 7 (60.7035ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-688000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (10.17s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-444000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1257000611/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728584629876223000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1257000611/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728584629876223000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1257000611/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728584629876223000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1257000611/001/test-1728584629876223000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (66.352167ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:23:49.943151   11135 retry.go:31] will retry after 489.604888ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (94.882709ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:23:50.529979   11135 retry.go:31] will retry after 558.800964ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.237958ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:23:51.178401   11135 retry.go:31] will retry after 598.025792ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.770042ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:23:51.864603   11135 retry.go:31] will retry after 2.110829645s: exit status 83
I1010 11:23:52.811283   11135 retry.go:31] will retry after 8.727072527s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.849917ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:23:54.066695   11135 retry.go:31] will retry after 2.550138936s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (93.539542ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:23:56.712646   11135 retry.go:31] will retry after 3.064796666s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (93.853583ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "sudo umount -f /mount-9p": exit status 83 (51.447666ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-444000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-444000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1257000611/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.17s)
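Note: the "will retry after ..." lines above come from the harness re-probing the mount with a randomized, growing delay until it gives up and skips (macOS blocks the unsigned mount server from listening, so the mount never appears). An illustrative retry helper with that shape; this is a sketch, not minikube's actual retry package:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry re-runs probe until it succeeds or attempts run out, sleeping a
	// randomized, growing interval in between, like the log lines above.
	func retry(attempts int, base time.Duration, probe func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = probe(); err == nil {
				return nil
			}
			d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retry(5, 500*time.Millisecond, func() error {
			return fmt.Errorf("exit status 83") // stand-in for the findmnt probe above
		})
	}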

TestFunctional/parallel/MountCmd/specific-port (11.49s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-444000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1574262229/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (69.93375ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:00.117264   11135 retry.go:31] will retry after 271.090658ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.807708ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:00.480564   11135 retry.go:31] will retry after 943.476719ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.300417ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:01.512760   11135 retry.go:31] will retry after 747.419737ms: exit status 83
I1010 11:24:01.540452   11135 retry.go:31] will retry after 12.264798374s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.27525ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:02.352846   11135 retry.go:31] will retry after 914.229078ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.112875ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:03.358562   11135 retry.go:31] will retry after 3.515219186s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.782917ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:06.967896   11135 retry.go:31] will retry after 4.314684914s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.102167ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "sudo umount -f /mount-9p": exit status 83 (49.741917ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-444000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-444000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1574262229/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.49s)

TestFunctional/parallel/MountCmd/VerifyCleanup (15.34s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-444000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4076371119/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-444000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4076371119/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-444000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4076371119/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1: exit status 83 (81.25425ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:11.628580   11135 retry.go:31] will retry after 707.975693ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1: exit status 83 (87.863292ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:12.426822   11135 retry.go:31] will retry after 742.409974ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1: exit status 83 (88.444875ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:13.260020   11135 retry.go:31] will retry after 599.313186ms: exit status 83
I1010 11:24:13.807534   11135 retry.go:31] will retry after 19.571934777s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1: exit status 83 (96.2835ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:13.957982   11135 retry.go:31] will retry after 2.410490948s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1: exit status 83 (92.162208ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:16.463100   11135 retry.go:31] will retry after 1.920481951s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1: exit status 83 (95.729166ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:18.481614   11135 retry.go:31] will retry after 2.729248711s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1: exit status 83 (92.132125ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
I1010 11:24:21.305400   11135 retry.go:31] will retry after 5.084307363s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-444000 ssh "findmnt -T" /mount1: exit status 83 (88.911375ms)
-- stdout --
	* The control-plane node functional-444000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-444000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-444000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4076371119/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-444000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4076371119/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-444000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4076371119/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (15.34s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.45s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-194000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-194000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-194000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-194000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-194000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-194000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-194000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-194000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-194000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-194000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-194000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-194000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-194000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-194000" does not exist

>>> k8s: netcat logs:
error: context "cilium-194000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-194000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-194000" does not exist

>>> k8s: coredns logs:
error: context "cilium-194000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-194000" does not exist

>>> k8s: api server logs:
error: context "cilium-194000" does not exist

>>> host: /etc/cni:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: ip a s:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: ip r s:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: iptables-save:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: iptables table nat:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-194000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-194000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-194000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-194000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-194000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-194000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-194000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-194000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-194000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-194000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-194000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: kubelet daemon config:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> k8s: kubelet logs:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-194000

>>> host: docker daemon status:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: docker daemon config:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: docker system info:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: cri-docker daemon status:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: cri-docker daemon config:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: cri-dockerd version:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: containerd daemon status:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: containerd daemon config:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: containerd config dump:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: crio daemon status:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: crio daemon config:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: /etc/crio:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

>>> host: crio config:
* Profile "cilium-194000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-194000"

----------------------- debugLogs end: cilium-194000 [took: 2.339659375s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-194000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-194000
--- SKIP: TestNetworkPlugins/group/cilium (2.45s)

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-046000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-046000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)