Test Report: QEMU_macOS 19679

7cae0481c1ae024841826a3639f158d099448b48:2024-09-20:36298

Failed tests (157/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.43
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.26
22 TestOffline 10.24
27 TestAddons/Setup 10.39
28 TestCertOptions 10.44
29 TestCertExpiration 195.65
30 TestDockerFlags 10.34
31 TestForceSystemdFlag 10.34
32 TestForceSystemdEnv 10.19
38 TestErrorSpam/setup 9.92
47 TestFunctional/serial/StartWithProxy 9.89
49 TestFunctional/serial/SoftStart 5.25
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 2.19
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.05
63 TestFunctional/serial/ExtraConfig 5.26
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.13
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.27
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.29
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.04
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.04
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 108.91
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.31
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 9.88
142 TestMultiControlPlane/serial/DeployApp 108.43
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
150 TestMultiControlPlane/serial/RestartSecondaryNode 54.17
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.24
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 3.38
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
162 TestImageBuild/serial/Setup 10.01
165 TestJSONOutput/start/Command 9.84
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.22
197 TestMountStart/serial/StartWithMountFirst 10.21
200 TestMultiNode/serial/FreshStart2Nodes 9.93
201 TestMultiNode/serial/DeployApp2Nodes 117.01
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.08
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 45.67
209 TestMultiNode/serial/RestartKeepsNodes 8.04
210 TestMultiNode/serial/DeleteNode 0.11
211 TestMultiNode/serial/StopMultiNode 2.92
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.06
217 TestPreload 10.01
219 TestScheduledStopUnix 10.13
220 TestSkaffold 12.32
223 TestRunningBinaryUpgrade 589.02
225 TestKubernetesUpgrade 18.83
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.03
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.06
241 TestStoppedBinaryUpgrade/Upgrade 574.15
243 TestPause/serial/Start 9.96
253 TestNoKubernetes/serial/StartWithK8s 10.25
254 TestNoKubernetes/serial/StartWithStopK8s 5.3
255 TestNoKubernetes/serial/Start 5.3
259 TestNoKubernetes/serial/StartNoArgs 5.33
261 TestNetworkPlugins/group/auto/Start 9.9
262 TestNetworkPlugins/group/kindnet/Start 9.79
263 TestNetworkPlugins/group/calico/Start 9.95
264 TestNetworkPlugins/group/custom-flannel/Start 9.76
265 TestNetworkPlugins/group/false/Start 9.86
266 TestNetworkPlugins/group/enable-default-cni/Start 9.91
267 TestNetworkPlugins/group/flannel/Start 9.81
268 TestNetworkPlugins/group/bridge/Start 9.82
269 TestNetworkPlugins/group/kubenet/Start 9.72
271 TestStartStop/group/old-k8s-version/serial/FirstStart 9.94
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/no-preload/serial/FirstStart 9.95
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
288 TestStartStop/group/no-preload/serial/SecondStart 5.25
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
292 TestStartStop/group/no-preload/serial/Pause 0.1
294 TestStartStop/group/embed-certs/serial/FirstStart 10
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.95
297 TestStartStop/group/embed-certs/serial/DeployApp 0.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.14
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
304 TestStartStop/group/embed-certs/serial/SecondStart 5.26
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/embed-certs/serial/Pause 0.1
312 TestStartStop/group/newest-cni/serial/FirstStart 9.9
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/SecondStart 5.26
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (13.43s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-134000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-134000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.429316875s)

-- stdout --
	{"specversion":"1.0","id":"7659e59c-29c2-47cb-b01c-ae4db3365a98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-134000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d895fec3-c2bb-4d51-a288-e592129bab61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19679"}}
	{"specversion":"1.0","id":"4951a4e1-c12e-470a-a1aa-8c957ac8bc6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig"}}
	{"specversion":"1.0","id":"4c33f28a-55f3-4dde-94c9-0abe7c00c2a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4c360396-3243-48ef-9b29-ca053aba483c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a7fa75c3-5855-4d82-b8ee-a6f33cc678da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube"}}
	{"specversion":"1.0","id":"8bf305a1-f96a-4e52-99a3-18a57e2beca3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"7c3503b1-f5bc-42cb-9300-2b3d464937aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c4df7ed-873c-49cc-8a43-126e69baa9ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"68c55278-7b73-4f4c-8c87-90210dbede69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa14426a-b34d-46f7-8420-6fb5c22a1c69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-134000\" primary control-plane node in \"download-only-134000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"73fdf45c-aebf-4697-a3b5-d45ede050081","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9700442-d2c5-445c-90b8-f91d4602a094","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10940d6c0 0x10940d6c0 0x10940d6c0 0x10940d6c0 0x10940d6c0 0x10940d6c0 0x10940d6c0] Decompressors:map[bz2:0x14000592880 gz:0x14000592888 tar:0x140005927f0 tar.bz2:0x14000592820 tar.gz:0x14000592850 tar.xz:0x14000592860 tar.zst:0x14000592870 tbz2:0x14000592820 tgz:0x14000592850 txz:0x14000592860 tzst:0x14000592870 xz:0x14000592890 zip:0x140005928a0 zst:0x14000592898] Getters:map[file:0x14001b54570 http:0x1400017b130 https:0x1400017b5e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"d709157e-e728-4530-bca3-c9eb0a8db59c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0920 10:32:04.183543    7280 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:32:04.183709    7280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:04.183712    7280 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:04.183715    7280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:04.183863    7280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	W0920 10:32:04.183953    7280 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19679-6783/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19679-6783/.minikube/config/config.json: no such file or directory
	I0920 10:32:04.185170    7280 out.go:352] Setting JSON to true
	I0920 10:32:04.203281    7280 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5495,"bootTime":1726848029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:32:04.203360    7280 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:32:04.207216    7280 out.go:97] [download-only-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:32:04.207353    7280 notify.go:220] Checking for updates...
	W0920 10:32:04.207407    7280 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 10:32:04.212223    7280 out.go:169] MINIKUBE_LOCATION=19679
	I0920 10:32:04.215662    7280 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:32:04.223274    7280 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:32:04.226226    7280 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:32:04.229175    7280 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	W0920 10:32:04.235225    7280 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 10:32:04.235439    7280 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:32:04.236968    7280 out.go:97] Using the qemu2 driver based on user configuration
	I0920 10:32:04.236985    7280 start.go:297] selected driver: qemu2
	I0920 10:32:04.236989    7280 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:32:04.237055    7280 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:32:04.240222    7280 out.go:169] Automatically selected the socket_vmnet network
	I0920 10:32:04.247748    7280 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0920 10:32:04.247852    7280 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:32:04.247910    7280 cni.go:84] Creating CNI manager for ""
	I0920 10:32:04.247952    7280 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 10:32:04.248026    7280 start.go:340] cluster config:
	{Name:download-only-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:32:04.251735    7280 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:32:04.255167    7280 out.go:97] Downloading VM boot image ...
	I0920 10:32:04.255185    7280 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso
	I0920 10:32:10.216843    7280 out.go:97] Starting "download-only-134000" primary control-plane node in "download-only-134000" cluster
	I0920 10:32:10.216868    7280 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:32:10.284434    7280 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:32:10.284444    7280 cache.go:56] Caching tarball of preloaded images
	I0920 10:32:10.285296    7280 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:32:10.289584    7280 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 10:32:10.289593    7280 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:32:10.380185    7280 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:32:16.243773    7280 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:32:16.243936    7280 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:32:16.939270    7280 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 10:32:16.939489    7280 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/download-only-134000/config.json ...
	I0920 10:32:16.939507    7280 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/download-only-134000/config.json: {Name:mk71334cad23d68a51beaafabf79bfa6a982dcb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:32:16.939744    7280 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:32:16.939934    7280 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0920 10:32:17.531795    7280 out.go:193] 
	W0920 10:32:17.535832    7280 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10940d6c0 0x10940d6c0 0x10940d6c0 0x10940d6c0 0x10940d6c0 0x10940d6c0 0x10940d6c0] Decompressors:map[bz2:0x14000592880 gz:0x14000592888 tar:0x140005927f0 tar.bz2:0x14000592820 tar.gz:0x14000592850 tar.xz:0x14000592860 tar.zst:0x14000592870 tbz2:0x14000592820 tgz:0x14000592850 txz:0x14000592860 tzst:0x14000592870 xz:0x14000592890 zip:0x140005928a0 zst:0x14000592898] Getters:map[file:0x14001b54570 http:0x1400017b130 https:0x1400017b5e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0920 10:32:17.535860    7280 out_reason.go:110] 
	W0920 10:32:17.544737    7280 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:32:17.547692    7280 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-134000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (13.43s)
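
The root cause above is a 404 on the kubectl checksum URL for v1.20.0 on darwin/arm64; dl.k8s.io does not appear to publish darwin/arm64 kubectl binaries for a release that old, so the cache step cannot succeed. A minimal Go sketch (not part of the test suite; URL copied from the log) that reproduces the check:

	package main
	
	import (
		"fmt"
		"net/http"
	)
	
	func main() {
		// Checksum URL taken from the failure above; a 404 here matches the
		// "bad response code: 404" the getter reports.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}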

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestBinaryMirror (0.26s)

=== RUN   TestBinaryMirror
I0920 10:32:25.015123    7279 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-534000 --alsologtostderr --binary-mirror http://127.0.0.1:51059 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-534000 --alsologtostderr --binary-mirror http://127.0.0.1:51059 --driver=qemu2 : exit status 40 (157.441042ms)

-- stdout --
	* [binary-mirror-534000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-534000" primary control-plane node in "binary-mirror-534000" cluster
	
	

-- /stdout --
** stderr ** 
	I0920 10:32:25.074998    7341 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:32:25.075122    7341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:25.075126    7341 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:25.075131    7341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:25.075264    7341 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:32:25.076381    7341 out.go:352] Setting JSON to false
	I0920 10:32:25.092519    7341 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5516,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:32:25.092597    7341 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:32:25.097665    7341 out.go:177] * [binary-mirror-534000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:32:25.105610    7341 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:32:25.105674    7341 notify.go:220] Checking for updates...
	I0920 10:32:25.113557    7341 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:32:25.116605    7341 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:32:25.119597    7341 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:32:25.123630    7341 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:32:25.126763    7341 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:32:25.130560    7341 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:32:25.136539    7341 start.go:297] selected driver: qemu2
	I0920 10:32:25.136545    7341 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:32:25.136610    7341 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:32:25.139584    7341 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:32:25.145728    7341 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0920 10:32:25.145828    7341 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:32:25.145849    7341 cni.go:84] Creating CNI manager for ""
	I0920 10:32:25.145874    7341 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:32:25.145880    7341 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:32:25.145929    7341 start.go:340] cluster config:
	{Name:binary-mirror-534000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:51059 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:32:25.149456    7341 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:32:25.156561    7341 out.go:177] * Starting "binary-mirror-534000" primary control-plane node in "binary-mirror-534000" cluster
	I0920 10:32:25.160607    7341 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:32:25.160627    7341 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:32:25.160640    7341 cache.go:56] Caching tarball of preloaded images
	I0920 10:32:25.160722    7341 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:32:25.160728    7341 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:32:25.160951    7341 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/binary-mirror-534000/config.json ...
	I0920 10:32:25.160963    7341 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/binary-mirror-534000/config.json: {Name:mkdaf466c04aa3c44439837a6031b9748fd5b7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:32:25.161323    7341 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:32:25.161388    7341 download.go:107] Downloading: http://127.0.0.1:51059/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51059/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0920 10:32:25.177464    7341 out.go:201] 
	W0920 10:32:25.181571    7341 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:51059/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51059/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:51059/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51059/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1096696c0 0x1096696c0 0x1096696c0 0x1096696c0 0x1096696c0 0x1096696c0 0x1096696c0] Decompressors:map[bz2:0x1400081b710 gz:0x1400081b718 tar:0x1400081b6c0 tar.bz2:0x1400081b6d0 tar.gz:0x1400081b6e0 tar.xz:0x1400081b6f0 tar.zst:0x1400081b700 tbz2:0x1400081b6d0 tgz:0x1400081b6e0 txz:0x1400081b6f0 tzst:0x1400081b700 xz:0x1400081b720 zip:0x1400081b730 zst:0x1400081b728] Getters:map[file:0x1400098c130 http:0x14000983540 https:0x14000983590] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:51059/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51059/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:51059/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:51059/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1096696c0 0x1096696c0 0x1096696c0 0x1096696c0 0x1096696c0 0x1096696c0 0x1096696c0] Decompressors:map[bz2:0x1400081b710 gz:0x1400081b718 tar:0x1400081b6c0 tar.bz2:0x1400081b6d0 tar.gz:0x1400081b6e0 tar.xz:0x1400081b6f0 tar.zst:0x1400081b700 tbz2:0x1400081b6d0 tgz:0x1400081b6e0 txz:0x1400081b6f0 tzst:0x1400081b700 xz:0x1400081b720 zip:0x1400081b730 zst:0x1400081b728] Getters:map[file:0x1400098c130 http:0x14000983540 https:0x14000983590] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W0920 10:32:25.181577    7341 out.go:270] * 
	* 
	W0920 10:32:25.182066    7341 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:32:25.194591    7341 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-534000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:51059" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-534000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-534000
--- FAIL: TestBinaryMirror (0.26s)
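
For context, the `?checksum=file:<url>` pattern in these download URLs is the hashicorp/go-getter convention that minikube relies on: the getter fetches the detached .sha256 file first and verifies the downloaded binary against it, so a torn-down mirror surfaces as a download/checksum error rather than a silently corrupt binary. A minimal sketch of that convention (hypothetical destination path; not minikube's actual code):

	package main
	
	import (
		"log"
	
		getter "github.com/hashicorp/go-getter"
	)
	
	func main() {
		// checksum=file:<url> makes go-getter download the .sha256 file and
		// verify the fetched binary against it before moving it into place.
		src := "http://127.0.0.1:51059/v1.31.1/bin/darwin/arm64/kubectl" +
			"?checksum=file:http://127.0.0.1:51059/v1.31.1/bin/darwin/arm64/kubectl.sha256"
		client := &getter.Client{
			Src:  src,
			Dst:  "/tmp/kubectl", // hypothetical destination
			Mode: getter.ClientModeFile,
		}
		if err := client.Get(); err != nil {
			// With the mirror gone, this fails much like the "unexpected EOF" above.
			log.Fatal(err)
		}
	}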

TestOffline (10.24s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-520000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-520000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.126868542s)

-- stdout --
	* [offline-docker-520000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-520000" primary control-plane node in "offline-docker-520000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-520000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:44:06.959728    8677 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:44:06.959876    8677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:06.959883    8677 out.go:358] Setting ErrFile to fd 2...
	I0920 10:44:06.959885    8677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:06.960035    8677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:44:06.961315    8677 out.go:352] Setting JSON to false
	I0920 10:44:06.978842    8677 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6217,"bootTime":1726848029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:44:06.978915    8677 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:44:06.984399    8677 out.go:177] * [offline-docker-520000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:44:06.991529    8677 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:44:06.991552    8677 notify.go:220] Checking for updates...
	I0920 10:44:06.998457    8677 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:44:07.001423    8677 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:44:07.004450    8677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:44:07.007435    8677 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:44:07.010497    8677 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:44:07.013739    8677 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:07.013789    8677 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:44:07.017438    8677 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:44:07.024441    8677 start.go:297] selected driver: qemu2
	I0920 10:44:07.024450    8677 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:44:07.024458    8677 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:44:07.026359    8677 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:44:07.029408    8677 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:44:07.032505    8677 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:44:07.032524    8677 cni.go:84] Creating CNI manager for ""
	I0920 10:44:07.032545    8677 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:44:07.032549    8677 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:44:07.032587    8677 start.go:340] cluster config:
	{Name:offline-docker-520000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:44:07.036537    8677 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:44:07.044449    8677 out.go:177] * Starting "offline-docker-520000" primary control-plane node in "offline-docker-520000" cluster
	I0920 10:44:07.047388    8677 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:44:07.047417    8677 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:44:07.047426    8677 cache.go:56] Caching tarball of preloaded images
	I0920 10:44:07.047497    8677 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:44:07.047502    8677 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:44:07.047570    8677 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/offline-docker-520000/config.json ...
	I0920 10:44:07.047580    8677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/offline-docker-520000/config.json: {Name:mk554a29ecb50f89e4c0427cf42a3080329e2a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:44:07.047799    8677 start.go:360] acquireMachinesLock for offline-docker-520000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:44:07.047829    8677 start.go:364] duration metric: took 24.333µs to acquireMachinesLock for "offline-docker-520000"
	I0920 10:44:07.047841    8677 start.go:93] Provisioning new machine with config: &{Name:offline-docker-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:44:07.047865    8677 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:44:07.055394    8677 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:44:07.071617    8677 start.go:159] libmachine.API.Create for "offline-docker-520000" (driver="qemu2")
	I0920 10:44:07.071647    8677 client.go:168] LocalClient.Create starting
	I0920 10:44:07.071743    8677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:44:07.071779    8677 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:07.071787    8677 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:07.071842    8677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:44:07.071865    8677 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:07.071877    8677 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:07.072268    8677 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:44:07.233638    8677 main.go:141] libmachine: Creating SSH key...
	I0920 10:44:07.572978    8677 main.go:141] libmachine: Creating Disk image...
	I0920 10:44:07.572988    8677 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:44:07.573288    8677 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/disk.qcow2
	I0920 10:44:07.583041    8677 main.go:141] libmachine: STDOUT: 
	I0920 10:44:07.583068    8677 main.go:141] libmachine: STDERR: 
	I0920 10:44:07.583152    8677 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/disk.qcow2 +20000M
	I0920 10:44:07.591963    8677 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:44:07.591982    8677 main.go:141] libmachine: STDERR: 
	I0920 10:44:07.591998    8677 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/disk.qcow2
	I0920 10:44:07.592009    8677 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:44:07.592026    8677 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:44:07.592062    8677 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:01:48:dd:d1:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/disk.qcow2
	I0920 10:44:07.593980    8677 main.go:141] libmachine: STDOUT: 
	I0920 10:44:07.593995    8677 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:44:07.594017    8677 client.go:171] duration metric: took 522.363125ms to LocalClient.Create
	I0920 10:44:09.596089    8677 start.go:128] duration metric: took 2.5482255s to createHost
	I0920 10:44:09.596125    8677 start.go:83] releasing machines lock for "offline-docker-520000", held for 2.548301584s
	W0920 10:44:09.596142    8677 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:09.607471    8677 out.go:177] * Deleting "offline-docker-520000" in qemu2 ...
	W0920 10:44:09.626834    8677 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:09.626848    8677 start.go:729] Will try again in 5 seconds ...
	I0920 10:44:14.629083    8677 start.go:360] acquireMachinesLock for offline-docker-520000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:44:14.629598    8677 start.go:364] duration metric: took 404.833µs to acquireMachinesLock for "offline-docker-520000"
	I0920 10:44:14.629747    8677 start.go:93] Provisioning new machine with config: &{Name:offline-docker-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:44:14.630116    8677 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:44:14.644795    8677 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:44:14.695875    8677 start.go:159] libmachine.API.Create for "offline-docker-520000" (driver="qemu2")
	I0920 10:44:14.695925    8677 client.go:168] LocalClient.Create starting
	I0920 10:44:14.696048    8677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:44:14.696108    8677 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:14.696125    8677 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:14.696198    8677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:44:14.696243    8677 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:14.696258    8677 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:14.696781    8677 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:44:14.875021    8677 main.go:141] libmachine: Creating SSH key...
	I0920 10:44:14.997609    8677 main.go:141] libmachine: Creating Disk image...
	I0920 10:44:14.997621    8677 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:44:14.997824    8677 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/disk.qcow2
	I0920 10:44:15.006809    8677 main.go:141] libmachine: STDOUT: 
	I0920 10:44:15.006831    8677 main.go:141] libmachine: STDERR: 
	I0920 10:44:15.006900    8677 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/disk.qcow2 +20000M
	I0920 10:44:15.014666    8677 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:44:15.014679    8677 main.go:141] libmachine: STDERR: 
	I0920 10:44:15.014696    8677 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/disk.qcow2
	I0920 10:44:15.014703    8677 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:44:15.014711    8677 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:44:15.014745    8677 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:3d:92:f5:b1:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/offline-docker-520000/disk.qcow2
	I0920 10:44:15.016255    8677 main.go:141] libmachine: STDOUT: 
	I0920 10:44:15.016272    8677 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:44:15.016290    8677 client.go:171] duration metric: took 320.36075ms to LocalClient.Create
	I0920 10:44:17.018423    8677 start.go:128] duration metric: took 2.38829s to createHost
	I0920 10:44:17.018466    8677 start.go:83] releasing machines lock for "offline-docker-520000", held for 2.388857917s
	W0920 10:44:17.018578    8677 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-520000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-520000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:17.026492    8677 out.go:201] 
	W0920 10:44:17.035604    8677 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:44:17.035611    8677 out.go:270] * 
	* 
	W0920 10:44:17.036294    8677 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:44:17.046518    8677 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-520000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-20 10:44:17.055987 -0700 PDT m=+732.912035959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-520000 -n offline-docker-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-520000 -n offline-docker-520000: exit status 7 (32.342ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-520000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-520000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-520000
--- FAIL: TestOffline (10.24s)
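
Every failure in this run traces back to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the netdev file descriptor (fd=3 in the QEMU command line) is never handed over. A minimal standalone Go sketch, not part of the test suite and assuming only the socket path shown in the logs, that reproduces the probe:

	// probe_vmnet.go — hedged sketch: dial the unix socket that
	// socket_vmnet_client needs. If the daemon is down, Dial fails with
	// "connection refused", matching the errors logged above.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the CI host, restarting the socket_vmnet daemon (it must be running as root and listening on /var/run/socket_vmnet before minikube starts) should clear this entire class of failures.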

TestAddons/Setup (10.39s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-927000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-927000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.385689834s)

-- stdout --
	* [addons-927000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-927000" primary control-plane node in "addons-927000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-927000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:32:25.368231    7355 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:32:25.368373    7355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:25.368377    7355 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:25.368379    7355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:25.368501    7355 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:32:25.369652    7355 out.go:352] Setting JSON to false
	I0920 10:32:25.385888    7355 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5516,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:32:25.385948    7355 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:32:25.390592    7355 out.go:177] * [addons-927000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:32:25.397606    7355 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:32:25.397649    7355 notify.go:220] Checking for updates...
	I0920 10:32:25.404583    7355 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:32:25.407657    7355 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:32:25.410543    7355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:32:25.413577    7355 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:32:25.416596    7355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:32:25.419760    7355 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:32:25.423576    7355 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:32:25.430527    7355 start.go:297] selected driver: qemu2
	I0920 10:32:25.430533    7355 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:32:25.430539    7355 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:32:25.432960    7355 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:32:25.435606    7355 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:32:25.439718    7355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:32:25.439748    7355 cni.go:84] Creating CNI manager for ""
	I0920 10:32:25.439773    7355 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:32:25.439777    7355 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:32:25.439813    7355 start.go:340] cluster config:
	{Name:addons-927000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-927000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:32:25.443560    7355 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:32:25.451553    7355 out.go:177] * Starting "addons-927000" primary control-plane node in "addons-927000" cluster
	I0920 10:32:25.454556    7355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:32:25.454588    7355 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:32:25.454600    7355 cache.go:56] Caching tarball of preloaded images
	I0920 10:32:25.454688    7355 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:32:25.454695    7355 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:32:25.454898    7355 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/addons-927000/config.json ...
	I0920 10:32:25.454909    7355 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/addons-927000/config.json: {Name:mk871a2306184f2ce0680357a367372c47c5d770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:32:25.455449    7355 start.go:360] acquireMachinesLock for addons-927000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:32:25.455520    7355 start.go:364] duration metric: took 65.292µs to acquireMachinesLock for "addons-927000"
	I0920 10:32:25.455533    7355 start.go:93] Provisioning new machine with config: &{Name:addons-927000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-927000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:32:25.455559    7355 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:32:25.462523    7355 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 10:32:25.481113    7355 start.go:159] libmachine.API.Create for "addons-927000" (driver="qemu2")
	I0920 10:32:25.481163    7355 client.go:168] LocalClient.Create starting
	I0920 10:32:25.481321    7355 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:32:25.684862    7355 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:32:25.828987    7355 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:32:26.101356    7355 main.go:141] libmachine: Creating SSH key...
	I0920 10:32:26.284506    7355 main.go:141] libmachine: Creating Disk image...
	I0920 10:32:26.284516    7355 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:32:26.284767    7355 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/disk.qcow2
	I0920 10:32:26.294351    7355 main.go:141] libmachine: STDOUT: 
	I0920 10:32:26.294376    7355 main.go:141] libmachine: STDERR: 
	I0920 10:32:26.294442    7355 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/disk.qcow2 +20000M
	I0920 10:32:26.302272    7355 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:32:26.302288    7355 main.go:141] libmachine: STDERR: 
	I0920 10:32:26.302303    7355 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/disk.qcow2
	I0920 10:32:26.302308    7355 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:32:26.302346    7355 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:32:26.302379    7355 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:55:e5:94:d8:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/disk.qcow2
	I0920 10:32:26.304010    7355 main.go:141] libmachine: STDOUT: 
	I0920 10:32:26.304025    7355 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:32:26.304054    7355 client.go:171] duration metric: took 822.886208ms to LocalClient.Create
	I0920 10:32:28.306206    7355 start.go:128] duration metric: took 2.850663292s to createHost
	I0920 10:32:28.306261    7355 start.go:83] releasing machines lock for "addons-927000", held for 2.850768708s
	W0920 10:32:28.306381    7355 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:32:28.319521    7355 out.go:177] * Deleting "addons-927000" in qemu2 ...
	W0920 10:32:28.352014    7355 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:32:28.352029    7355 start.go:729] Will try again in 5 seconds ...
	I0920 10:32:33.354209    7355 start.go:360] acquireMachinesLock for addons-927000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:32:33.354667    7355 start.go:364] duration metric: took 357.292µs to acquireMachinesLock for "addons-927000"
	I0920 10:32:33.354774    7355 start.go:93] Provisioning new machine with config: &{Name:addons-927000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-927000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:32:33.355037    7355 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:32:33.374658    7355 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 10:32:33.425857    7355 start.go:159] libmachine.API.Create for "addons-927000" (driver="qemu2")
	I0920 10:32:33.425895    7355 client.go:168] LocalClient.Create starting
	I0920 10:32:33.426029    7355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:32:33.426097    7355 main.go:141] libmachine: Decoding PEM data...
	I0920 10:32:33.426116    7355 main.go:141] libmachine: Parsing certificate...
	I0920 10:32:33.426202    7355 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:32:33.426252    7355 main.go:141] libmachine: Decoding PEM data...
	I0920 10:32:33.426264    7355 main.go:141] libmachine: Parsing certificate...
	I0920 10:32:33.426812    7355 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:32:33.599979    7355 main.go:141] libmachine: Creating SSH key...
	I0920 10:32:33.656175    7355 main.go:141] libmachine: Creating Disk image...
	I0920 10:32:33.656181    7355 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:32:33.656388    7355 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/disk.qcow2
	I0920 10:32:33.665506    7355 main.go:141] libmachine: STDOUT: 
	I0920 10:32:33.665545    7355 main.go:141] libmachine: STDERR: 
	I0920 10:32:33.665620    7355 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/disk.qcow2 +20000M
	I0920 10:32:33.673406    7355 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:32:33.673433    7355 main.go:141] libmachine: STDERR: 
	I0920 10:32:33.673454    7355 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/disk.qcow2
	I0920 10:32:33.673459    7355 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:32:33.673466    7355 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:32:33.673496    7355 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:8b:44:3e:b1:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/addons-927000/disk.qcow2
	I0920 10:32:33.675134    7355 main.go:141] libmachine: STDOUT: 
	I0920 10:32:33.675155    7355 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:32:33.675170    7355 client.go:171] duration metric: took 249.273583ms to LocalClient.Create
	I0920 10:32:35.677450    7355 start.go:128] duration metric: took 2.322366417s to createHost
	I0920 10:32:35.677539    7355 start.go:83] releasing machines lock for "addons-927000", held for 2.322875625s
	W0920 10:32:35.677917    7355 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-927000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-927000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:32:35.687368    7355 out.go:201] 
	W0920 10:32:35.697575    7355 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:32:35.697606    7355 out.go:270] * 
	* 
	W0920 10:32:35.700180    7355 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:32:35.710269    7355 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-927000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.39s)
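
Note that everything before the network attach succeeds: the CA and client certificates are created, the ISO is copied, and both qemu-img steps complete with empty STDERR. The two disk-image commands from the log can be reproduced standalone; a hedged Go sketch (the file names here are placeholders, not the CI paths above):

	// disk_image.go — sketch of the qemu-img convert + resize sequence the
	// log shows succeeding before the socket_vmnet failure.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		// Convert the raw seed image to qcow2, then grow it by 20000M,
		// mirroring the "executing: qemu-img ..." lines in the log.
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2")
		run("qemu-img", "resize", "disk.qcow2", "+20000M")
	}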

TestCertOptions (10.44s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-683000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-683000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.169603958s)

-- stdout --
	* [cert-options-683000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-683000" primary control-plane node in "cert-options-683000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-683000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-683000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-683000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-683000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-683000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (77.55875ms)

-- stdout --
	* The control-plane node cert-options-683000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-683000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-683000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-683000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-683000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-683000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.725875ms)

-- stdout --
	* The control-plane node cert-options-683000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-683000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-683000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-683000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-683000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-20 10:44:48.02504 -0700 PDT m=+763.881203209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-683000 -n cert-options-683000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-683000 -n cert-options-683000: exit status 7 (30.826458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-683000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-683000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-683000
--- FAIL: TestCertOptions (10.44s)
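
Because the VM never boots, the SAN assertions at cert_options_test.go:69 fail without ever seeing a certificate. For reference, the property the test checks through openssl can be expressed directly with Go's crypto/x509; this illustrative sketch (not the test's implementation, which shells out to openssl inside the VM) lists the SANs that should contain 127.0.0.1, 192.168.15.15, localhost, and www.google.com:

	// san_list.go — hedged sketch; reads a PEM certificate path from argv,
	// e.g. a copy of /var/lib/minikube/certs/apiserver.crt.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile(os.Args[1])
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com
		fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15
	}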

TestCertExpiration (195.65s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-196000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-196000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.259951667s)

-- stdout --
	* [cert-expiration-196000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-196000" primary control-plane node in "cert-expiration-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-196000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-196000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-196000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.232790667s)

-- stdout --
	* [cert-expiration-196000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-196000" primary control-plane node in "cert-expiration-196000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-196000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-196000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-196000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-196000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-196000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-196000" primary control-plane node in "cert-expiration-196000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-196000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-196000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-196000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-20 10:47:47.994245 -0700 PDT m=+943.851071334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-196000 -n cert-expiration-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-196000 -n cert-expiration-196000: exit status 7 (66.930458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-196000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-196000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-196000
--- FAIL: TestCertExpiration (195.65s)
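
The two starts exercise --cert-expiration=3m and then --cert-expiration=8760h, and the roughly three-minute wait between them accounts for most of this test's 195s runtime; since no VM ever boots, there are no certificates whose lifetime could be checked. What the flag ultimately controls is the certificate's NotAfter field; a hedged companion to the SAN sketch above (same caveats, same PEM-path argument):

	// cert_expiry.go — illustrative sketch of the expiry window that
	// --cert-expiration configures; not the test's own code.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		pemBytes, err := os.ReadFile(os.Args[1])
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("expires %s (%s from now)\n",
			cert.NotAfter.Format(time.RFC3339), time.Until(cert.NotAfter).Round(time.Minute))
	}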

TestDockerFlags (10.34s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-211000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-211000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.113589291s)

-- stdout --
	* [docker-flags-211000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-211000" primary control-plane node in "docker-flags-211000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-211000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:44:27.387157    8864 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:44:27.387302    8864 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:27.387305    8864 out.go:358] Setting ErrFile to fd 2...
	I0920 10:44:27.387308    8864 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:27.387443    8864 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:44:27.388531    8864 out.go:352] Setting JSON to false
	I0920 10:44:27.404507    8864 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6238,"bootTime":1726848029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:44:27.404568    8864 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:44:27.412419    8864 out.go:177] * [docker-flags-211000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:44:27.417895    8864 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:44:27.417968    8864 notify.go:220] Checking for updates...
	I0920 10:44:27.426304    8864 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:44:27.429213    8864 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:44:27.433224    8864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:44:27.436264    8864 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:44:27.439156    8864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:44:27.442604    8864 config.go:182] Loaded profile config "force-systemd-flag-999000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:27.442672    8864 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:27.442715    8864 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:44:27.447213    8864 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:44:27.454279    8864 start.go:297] selected driver: qemu2
	I0920 10:44:27.454287    8864 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:44:27.454295    8864 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:44:27.456650    8864 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:44:27.460315    8864 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:44:27.461913    8864 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0920 10:44:27.461944    8864 cni.go:84] Creating CNI manager for ""
	I0920 10:44:27.461971    8864 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:44:27.461979    8864 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:44:27.462014    8864 start.go:340] cluster config:
	{Name:docker-flags-211000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:44:27.465612    8864 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:44:27.474300    8864 out.go:177] * Starting "docker-flags-211000" primary control-plane node in "docker-flags-211000" cluster
	I0920 10:44:27.478225    8864 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:44:27.478247    8864 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:44:27.478252    8864 cache.go:56] Caching tarball of preloaded images
	I0920 10:44:27.478319    8864 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:44:27.478325    8864 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:44:27.478393    8864 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/docker-flags-211000/config.json ...
	I0920 10:44:27.478405    8864 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/docker-flags-211000/config.json: {Name:mk5d404bbd3fa9fa2309eab4fe6af7915261190c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:44:27.478637    8864 start.go:360] acquireMachinesLock for docker-flags-211000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:44:27.478674    8864 start.go:364] duration metric: took 29.708µs to acquireMachinesLock for "docker-flags-211000"
	I0920 10:44:27.478688    8864 start.go:93] Provisioning new machine with config: &{Name:docker-flags-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:44:27.478716    8864 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:44:27.487241    8864 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:44:27.505961    8864 start.go:159] libmachine.API.Create for "docker-flags-211000" (driver="qemu2")
	I0920 10:44:27.505990    8864 client.go:168] LocalClient.Create starting
	I0920 10:44:27.506060    8864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:44:27.506090    8864 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:27.506103    8864 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:27.506158    8864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:44:27.506182    8864 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:27.506193    8864 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:27.506610    8864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:44:27.670231    8864 main.go:141] libmachine: Creating SSH key...
	I0920 10:44:27.847059    8864 main.go:141] libmachine: Creating Disk image...
	I0920 10:44:27.847066    8864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:44:27.847282    8864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/disk.qcow2
	I0920 10:44:27.856663    8864 main.go:141] libmachine: STDOUT: 
	I0920 10:44:27.856683    8864 main.go:141] libmachine: STDERR: 
	I0920 10:44:27.856743    8864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/disk.qcow2 +20000M
	I0920 10:44:27.864700    8864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:44:27.864722    8864 main.go:141] libmachine: STDERR: 
	I0920 10:44:27.864740    8864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/disk.qcow2
	I0920 10:44:27.864747    8864 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:44:27.864758    8864 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:44:27.864788    8864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:e2:61:e1:59:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/disk.qcow2
	I0920 10:44:27.866405    8864 main.go:141] libmachine: STDOUT: 
	I0920 10:44:27.866420    8864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:44:27.866441    8864 client.go:171] duration metric: took 360.444167ms to LocalClient.Create
	I0920 10:44:29.868602    8864 start.go:128] duration metric: took 2.389875959s to createHost
	I0920 10:44:29.868664    8864 start.go:83] releasing machines lock for "docker-flags-211000", held for 2.389986708s
	W0920 10:44:29.868740    8864 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:29.898859    8864 out.go:177] * Deleting "docker-flags-211000" in qemu2 ...
	W0920 10:44:29.924396    8864 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:29.924412    8864 start.go:729] Will try again in 5 seconds ...
	I0920 10:44:34.926618    8864 start.go:360] acquireMachinesLock for docker-flags-211000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:44:34.933485    8864 start.go:364] duration metric: took 6.739833ms to acquireMachinesLock for "docker-flags-211000"
	I0920 10:44:34.933657    8864 start.go:93] Provisioning new machine with config: &{Name:docker-flags-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:44:34.933942    8864 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:44:34.948533    8864 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:44:35.000313    8864 start.go:159] libmachine.API.Create for "docker-flags-211000" (driver="qemu2")
	I0920 10:44:35.000368    8864 client.go:168] LocalClient.Create starting
	I0920 10:44:35.000486    8864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:44:35.000541    8864 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:35.000555    8864 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:35.000621    8864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:44:35.000668    8864 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:35.000679    8864 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:35.001290    8864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:44:35.224774    8864 main.go:141] libmachine: Creating SSH key...
	I0920 10:44:35.400519    8864 main.go:141] libmachine: Creating Disk image...
	I0920 10:44:35.400529    8864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:44:35.400750    8864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/disk.qcow2
	I0920 10:44:35.410347    8864 main.go:141] libmachine: STDOUT: 
	I0920 10:44:35.410361    8864 main.go:141] libmachine: STDERR: 
	I0920 10:44:35.410428    8864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/disk.qcow2 +20000M
	I0920 10:44:35.418235    8864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:44:35.418248    8864 main.go:141] libmachine: STDERR: 
	I0920 10:44:35.418259    8864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/disk.qcow2
	I0920 10:44:35.418265    8864 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:44:35.418273    8864 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:44:35.418312    8864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:63:48:8a:c4:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/docker-flags-211000/disk.qcow2
	I0920 10:44:35.419879    8864 main.go:141] libmachine: STDOUT: 
	I0920 10:44:35.419894    8864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:44:35.419909    8864 client.go:171] duration metric: took 419.537083ms to LocalClient.Create
	I0920 10:44:37.422298    8864 start.go:128] duration metric: took 2.488205167s to createHost
	I0920 10:44:37.422394    8864 start.go:83] releasing machines lock for "docker-flags-211000", held for 2.488874875s
	W0920 10:44:37.422767    8864 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-211000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-211000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:37.441941    8864 out.go:201] 
	W0920 10:44:37.448754    8864 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:44:37.448798    8864 out.go:270] * 
	* 
	W0920 10:44:37.451591    8864 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:44:37.459690    8864 out.go:201] 
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-211000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-211000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-211000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (75.670625ms)
-- stdout --
	* The control-plane node docker-flags-211000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-211000"
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-211000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-211000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-211000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-211000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-211000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-211000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-211000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.641334ms)
-- stdout --
	* The control-plane node docker-flags-211000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-211000"
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-211000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-211000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-211000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-211000\"\n"
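The two ssh probes above are the substance of TestDockerFlags: the --docker-env values must surface in the docker unit's Environment property, and the --docker-opt values in its ExecStart line. A minimal Go sketch of that assertion logic (a paraphrase of docker_test.go, not its exact code; the binary path and profile name are reused from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkDockerProperty paraphrases the test's assertions: run
// `minikube ssh "sudo systemctl show docker --property=<prop> --no-pager"`
// and require every expected token in the output.
func checkDockerProperty(profile, prop string, want []string) error {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", profile, "ssh",
		"sudo systemctl show docker --property="+prop+" --no-pager").CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh failed: %v (output: %s)", err, out)
	}
	for _, w := range want {
		if !strings.Contains(string(out), w) {
			return fmt.Errorf("%s is missing %q in %q", prop, w, string(out))
		}
	}
	return nil
}

func main() {
	profile := "docker-flags-211000" // profile from this run; hypothetical reuse
	if err := checkDockerProperty(profile, "Environment", []string{"FOO=BAR", "BAZ=BAT"}); err != nil {
		fmt.Println(err)
	}
	if err := checkDockerProperty(profile, "ExecStart", []string{"--debug"}); err != nil {
		fmt.Println(err)
	}
}

In this run both probes exit 83 because the host never started, so the assertions see the "host is not running" hint instead of systemd properties.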
panic.go:629: *** TestDockerFlags FAILED at 2024-09-20 10:44:37.596636 -0700 PDT m=+753.452761167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-211000 -n docker-flags-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-211000 -n docker-flags-211000: exit status 7 (29.41175ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-211000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-211000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-211000
--- FAIL: TestDockerFlags (10.34s)
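Every start in this section dies the same way: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet. A host-side probe can confirm the daemon is down without involving minikube at all; this is a diagnostic sketch, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the daemon's unix socket; "connection refused" here matches
	// the ERROR lines in the failed starts above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial is refused, the likely fix is on the CI host (restarting the socket_vmnet service), not in any minikube profile.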
TestForceSystemdFlag (10.34s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-999000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-999000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.154482791s)
-- stdout --
	* [force-systemd-flag-999000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-999000" primary control-plane node in "force-systemd-flag-999000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-999000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
-- /stdout --
** stderr ** 
	I0920 10:44:22.181114    8843 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:44:22.181250    8843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:22.181254    8843 out.go:358] Setting ErrFile to fd 2...
	I0920 10:44:22.181256    8843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:22.181377    8843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:44:22.182454    8843 out.go:352] Setting JSON to false
	I0920 10:44:22.198332    8843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6233,"bootTime":1726848029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:44:22.198449    8843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:44:22.206017    8843 out.go:177] * [force-systemd-flag-999000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:44:22.223232    8843 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:44:22.223246    8843 notify.go:220] Checking for updates...
	I0920 10:44:22.232996    8843 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:44:22.237009    8843 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:44:22.239907    8843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:44:22.242997    8843 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:44:22.246029    8843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:44:22.247870    8843 config.go:182] Loaded profile config "force-systemd-env-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:22.247949    8843 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:22.247999    8843 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:44:22.252018    8843 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:44:22.258841    8843 start.go:297] selected driver: qemu2
	I0920 10:44:22.258849    8843 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:44:22.258856    8843 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:44:22.261218    8843 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:44:22.265094    8843 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:44:22.268149    8843 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:44:22.268167    8843 cni.go:84] Creating CNI manager for ""
	I0920 10:44:22.268200    8843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:44:22.268204    8843 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:44:22.268241    8843 start.go:340] cluster config:
	{Name:force-systemd-flag-999000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-999000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:44:22.271820    8843 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:44:22.279990    8843 out.go:177] * Starting "force-systemd-flag-999000" primary control-plane node in "force-systemd-flag-999000" cluster
	I0920 10:44:22.284028    8843 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:44:22.284046    8843 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:44:22.284053    8843 cache.go:56] Caching tarball of preloaded images
	I0920 10:44:22.284126    8843 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:44:22.284133    8843 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:44:22.284203    8843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/force-systemd-flag-999000/config.json ...
	I0920 10:44:22.284221    8843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/force-systemd-flag-999000/config.json: {Name:mk4a277dcf15f290f00d6e7acebd2b7a8e2b18e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:44:22.284449    8843 start.go:360] acquireMachinesLock for force-systemd-flag-999000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:44:22.284488    8843 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "force-systemd-flag-999000"
	I0920 10:44:22.284503    8843 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-999000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-999000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:44:22.284535    8843 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:44:22.293035    8843 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:44:22.311212    8843 start.go:159] libmachine.API.Create for "force-systemd-flag-999000" (driver="qemu2")
	I0920 10:44:22.311242    8843 client.go:168] LocalClient.Create starting
	I0920 10:44:22.311311    8843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:44:22.311350    8843 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:22.311361    8843 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:22.311400    8843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:44:22.311425    8843 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:22.311434    8843 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:22.311780    8843 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:44:22.476403    8843 main.go:141] libmachine: Creating SSH key...
	I0920 10:44:22.526558    8843 main.go:141] libmachine: Creating Disk image...
	I0920 10:44:22.526585    8843 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:44:22.526781    8843 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/disk.qcow2
	I0920 10:44:22.536086    8843 main.go:141] libmachine: STDOUT: 
	I0920 10:44:22.536109    8843 main.go:141] libmachine: STDERR: 
	I0920 10:44:22.536180    8843 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/disk.qcow2 +20000M
	I0920 10:44:22.543927    8843 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:44:22.543943    8843 main.go:141] libmachine: STDERR: 
	I0920 10:44:22.543961    8843 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/disk.qcow2
	I0920 10:44:22.543969    8843 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:44:22.543984    8843 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:44:22.544010    8843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:be:d2:15:da:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/disk.qcow2
	I0920 10:44:22.545672    8843 main.go:141] libmachine: STDOUT: 
	I0920 10:44:22.545691    8843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:44:22.545709    8843 client.go:171] duration metric: took 234.461292ms to LocalClient.Create
	I0920 10:44:24.547868    8843 start.go:128] duration metric: took 2.263324875s to createHost
	I0920 10:44:24.547944    8843 start.go:83] releasing machines lock for "force-systemd-flag-999000", held for 2.263454042s
	W0920 10:44:24.548052    8843 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:24.565198    8843 out.go:177] * Deleting "force-systemd-flag-999000" in qemu2 ...
	W0920 10:44:24.596926    8843 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:24.596943    8843 start.go:729] Will try again in 5 seconds ...
	I0920 10:44:29.599227    8843 start.go:360] acquireMachinesLock for force-systemd-flag-999000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:44:29.868844    8843 start.go:364] duration metric: took 269.432458ms to acquireMachinesLock for "force-systemd-flag-999000"
	I0920 10:44:29.868952    8843 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-999000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-999000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:44:29.869222    8843 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:44:29.885880    8843 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:44:29.934994    8843 start.go:159] libmachine.API.Create for "force-systemd-flag-999000" (driver="qemu2")
	I0920 10:44:29.935059    8843 client.go:168] LocalClient.Create starting
	I0920 10:44:29.935227    8843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:44:29.935281    8843 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:29.935298    8843 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:29.935358    8843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:44:29.935414    8843 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:29.935433    8843 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:29.936028    8843 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:44:30.140657    8843 main.go:141] libmachine: Creating SSH key...
	I0920 10:44:30.236129    8843 main.go:141] libmachine: Creating Disk image...
	I0920 10:44:30.236135    8843 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:44:30.236315    8843 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/disk.qcow2
	I0920 10:44:30.246474    8843 main.go:141] libmachine: STDOUT: 
	I0920 10:44:30.246491    8843 main.go:141] libmachine: STDERR: 
	I0920 10:44:30.246545    8843 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/disk.qcow2 +20000M
	I0920 10:44:30.254450    8843 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:44:30.254470    8843 main.go:141] libmachine: STDERR: 
	I0920 10:44:30.254486    8843 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/disk.qcow2
	I0920 10:44:30.254492    8843 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:44:30.254502    8843 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:44:30.254542    8843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:9c:ff:63:69:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-flag-999000/disk.qcow2
	I0920 10:44:30.256178    8843 main.go:141] libmachine: STDOUT: 
	I0920 10:44:30.256200    8843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:44:30.256212    8843 client.go:171] duration metric: took 321.135833ms to LocalClient.Create
	I0920 10:44:32.258392    8843 start.go:128] duration metric: took 2.389130958s to createHost
	I0920 10:44:32.258476    8843 start.go:83] releasing machines lock for "force-systemd-flag-999000", held for 2.389593292s
	W0920 10:44:32.258749    8843 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-999000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-999000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:32.276202    8843 out.go:201] 
	W0920 10:44:32.282427    8843 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:44:32.282480    8843 out.go:270] * 
	* 
	W0920 10:44:32.285266    8843 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:44:32.294315    8843 out.go:201] 
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-999000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-999000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-999000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.748292ms)
-- stdout --
	* The control-plane node force-systemd-flag-999000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-999000"
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-999000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
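The assertion behind docker_test.go:110 is that, with --force-systemd, Docker inside the VM reports the systemd cgroup driver. A hedged paraphrase in Go (not the test's exact code; binary path and profile name are taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"-p", "force-systemd-flag-999000", "ssh",
		"docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		// Exit status 83, as above: the host is stopped, so there is
		// no Docker daemon to interrogate.
		fmt.Println("ssh failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != "systemd" {
		fmt.Printf("expected cgroup driver systemd, got %q\n", got)
	}
}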
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-20 10:44:32.388546 -0700 PDT m=+748.244651709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-999000 -n force-systemd-flag-999000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-999000 -n force-systemd-flag-999000: exit status 7 (35.144291ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-999000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-999000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-999000
--- FAIL: TestForceSystemdFlag (10.34s)
TestForceSystemdEnv (10.19s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-463000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I0920 10:44:18.705398    7279 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0920 10:44:18.705430    7279 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0920 10:44:18.705490    7279 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0920 10:44:18.705518    7279 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/002/docker-machine-driver-hyperkit
I0920 10:44:19.115928    7279 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x104d4ed40 0x104d4ed40 0x104d4ed40 0x104d4ed40 0x104d4ed40 0x104d4ed40 0x104d4ed40] Decompressors:map[bz2:0x1400051fc90 gz:0x1400051fc98 tar:0x1400051fc40 tar.bz2:0x1400051fc50 tar.gz:0x1400051fc60 tar.xz:0x1400051fc70 tar.zst:0x1400051fc80 tbz2:0x1400051fc50 tgz:0x1400051fc60 txz:0x1400051fc70 tzst:0x1400051fc80 xz:0x1400051fca0 zip:0x1400051fcb0 zst:0x1400051fca8] Getters:map[file:0x140058008f0 http:0x14000b01540 https:0x14000b01590] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 10:44:19.116066    7279 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/002/docker-machine-driver-hyperkit
I0920 10:44:22.106936    7279 install.go:79] stdout: 
W0920 10:44:22.107141    7279 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/002/docker-machine-driver-hyperkit 

I0920 10:44:22.107166    7279 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/002/docker-machine-driver-hyperkit]
I0920 10:44:22.121792    7279 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/002/docker-machine-driver-hyperkit]
I0920 10:44:22.133935    7279 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/002/docker-machine-driver-hyperkit]
I0920 10:44:22.142903    7279 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/002/docker-machine-driver-hyperkit]
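Note: the docker-machine-driver-hyperkit lines above come from pid 7279, i.e. the concurrently running TestHyperKitDriverInstallOrUpdate (see the temp directory name), interleaved into this test's output; TestForceSystemdEnv itself runs as pid 8823 below. The 404-then-fallback behavior they show can be checked by hand against the exact release URLs from the log (the curl invocation is an illustrative assumption):

$ curl -sIL https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 | head -n 1
$ curl -sIL https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 | head -n 1

Per the log, the arm64-specific checksum is missing (bad response code: 404), so install.go falls back to the common binary and then escalates with sudo chown/chmod to install it.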
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-463000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.992924958s)

-- stdout --
	* [force-systemd-env-463000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-463000" primary control-plane node in "force-systemd-env-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:44:17.200940    8823 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:44:17.201075    8823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:17.201078    8823 out.go:358] Setting ErrFile to fd 2...
	I0920 10:44:17.201084    8823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:44:17.201213    8823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:44:17.202300    8823 out.go:352] Setting JSON to false
	I0920 10:44:17.218903    8823 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6228,"bootTime":1726848029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:44:17.218989    8823 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:44:17.225383    8823 out.go:177] * [force-systemd-env-463000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:44:17.233557    8823 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:44:17.233674    8823 notify.go:220] Checking for updates...
	I0920 10:44:17.241513    8823 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:44:17.244572    8823 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:44:17.248517    8823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:44:17.251512    8823 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:44:17.254547    8823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0920 10:44:17.257735    8823 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:44:17.257778    8823 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:44:17.261518    8823 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:44:17.268496    8823 start.go:297] selected driver: qemu2
	I0920 10:44:17.268501    8823 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:44:17.268506    8823 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:44:17.270716    8823 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:44:17.273513    8823 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:44:17.276575    8823 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:44:17.276586    8823 cni.go:84] Creating CNI manager for ""
	I0920 10:44:17.276610    8823 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:44:17.276614    8823 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:44:17.276637    8823 start.go:340] cluster config:
	{Name:force-systemd-env-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:44:17.279973    8823 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:44:17.288478    8823 out.go:177] * Starting "force-systemd-env-463000" primary control-plane node in "force-systemd-env-463000" cluster
	I0920 10:44:17.291388    8823 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:44:17.291400    8823 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:44:17.291405    8823 cache.go:56] Caching tarball of preloaded images
	I0920 10:44:17.291462    8823 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:44:17.291467    8823 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:44:17.291517    8823 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/force-systemd-env-463000/config.json ...
	I0920 10:44:17.291530    8823 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/force-systemd-env-463000/config.json: {Name:mk6fe815a021b64f6575998994bdf3af08fbd202 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:44:17.291730    8823 start.go:360] acquireMachinesLock for force-systemd-env-463000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:44:17.291761    8823 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "force-systemd-env-463000"
	I0920 10:44:17.291773    8823 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:44:17.291797    8823 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:44:17.300462    8823 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:44:17.315916    8823 start.go:159] libmachine.API.Create for "force-systemd-env-463000" (driver="qemu2")
	I0920 10:44:17.315948    8823 client.go:168] LocalClient.Create starting
	I0920 10:44:17.316015    8823 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:44:17.316046    8823 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:17.316055    8823 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:17.316095    8823 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:44:17.316117    8823 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:17.316126    8823 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:17.316470    8823 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:44:17.474771    8823 main.go:141] libmachine: Creating SSH key...
	I0920 10:44:17.571586    8823 main.go:141] libmachine: Creating Disk image...
	I0920 10:44:17.571599    8823 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:44:17.571808    8823 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/disk.qcow2
	I0920 10:44:17.581089    8823 main.go:141] libmachine: STDOUT: 
	I0920 10:44:17.581105    8823 main.go:141] libmachine: STDERR: 
	I0920 10:44:17.581177    8823 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/disk.qcow2 +20000M
	I0920 10:44:17.589191    8823 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:44:17.589206    8823 main.go:141] libmachine: STDERR: 
	I0920 10:44:17.589222    8823 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/disk.qcow2
	I0920 10:44:17.589225    8823 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:44:17.589237    8823 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:44:17.589263    8823 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:17:15:5d:bf:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/disk.qcow2
	I0920 10:44:17.590848    8823 main.go:141] libmachine: STDOUT: 
	I0920 10:44:17.590862    8823 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:44:17.590883    8823 client.go:171] duration metric: took 274.929292ms to LocalClient.Create
	I0920 10:44:19.593093    8823 start.go:128] duration metric: took 2.301272417s to createHost
	I0920 10:44:19.593170    8823 start.go:83] releasing machines lock for "force-systemd-env-463000", held for 2.301407708s
	W0920 10:44:19.593269    8823 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:19.610714    8823 out.go:177] * Deleting "force-systemd-env-463000" in qemu2 ...
	W0920 10:44:19.642227    8823 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:19.642264    8823 start.go:729] Will try again in 5 seconds ...
	I0920 10:44:24.644474    8823 start.go:360] acquireMachinesLock for force-systemd-env-463000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:44:24.644939    8823 start.go:364] duration metric: took 363.959µs to acquireMachinesLock for "force-systemd-env-463000"
	I0920 10:44:24.645093    8823 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:44:24.645313    8823 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:44:24.654261    8823 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0920 10:44:24.703468    8823 start.go:159] libmachine.API.Create for "force-systemd-env-463000" (driver="qemu2")
	I0920 10:44:24.703525    8823 client.go:168] LocalClient.Create starting
	I0920 10:44:24.703633    8823 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:44:24.703693    8823 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:24.703711    8823 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:24.703776    8823 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:44:24.703821    8823 main.go:141] libmachine: Decoding PEM data...
	I0920 10:44:24.703832    8823 main.go:141] libmachine: Parsing certificate...
	I0920 10:44:24.704700    8823 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:44:24.886021    8823 main.go:141] libmachine: Creating SSH key...
	I0920 10:44:25.091388    8823 main.go:141] libmachine: Creating Disk image...
	I0920 10:44:25.091396    8823 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:44:25.091656    8823 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/disk.qcow2
	I0920 10:44:25.101442    8823 main.go:141] libmachine: STDOUT: 
	I0920 10:44:25.101463    8823 main.go:141] libmachine: STDERR: 
	I0920 10:44:25.101530    8823 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/disk.qcow2 +20000M
	I0920 10:44:25.109394    8823 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:44:25.109437    8823 main.go:141] libmachine: STDERR: 
	I0920 10:44:25.109453    8823 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/disk.qcow2
	I0920 10:44:25.109458    8823 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:44:25.109466    8823 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:44:25.109494    8823 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:0c:0c:36:37:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/force-systemd-env-463000/disk.qcow2
	I0920 10:44:25.111049    8823 main.go:141] libmachine: STDOUT: 
	I0920 10:44:25.111063    8823 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:44:25.111084    8823 client.go:171] duration metric: took 407.554834ms to LocalClient.Create
	I0920 10:44:27.113251    8823 start.go:128] duration metric: took 2.467882167s to createHost
	I0920 10:44:27.113316    8823 start.go:83] releasing machines lock for "force-systemd-env-463000", held for 2.46835825s
	W0920 10:44:27.113694    8823 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:44:27.129294    8823 out.go:201] 
	W0920 10:44:27.133378    8823 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:44:27.133403    8823 out.go:270] * 
	* 
	W0920 10:44:27.136124    8823 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:44:27.149310    8823 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-463000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-463000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-463000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.758334ms)

-- stdout --
	* The control-plane node force-systemd-env-463000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-463000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-463000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-20 10:44:27.246335 -0700 PDT m=+743.102421626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-463000 -n force-systemd-env-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-463000 -n force-systemd-env-463000: exit status 7 (34.535375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-463000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-463000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-463000
--- FAIL: TestForceSystemdEnv (10.19s)
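Note: TestForceSystemdEnv exercises the same cgroup-driver check as TestForceSystemdFlag above, but requests systemd via the MINIKUBE_FORCE_SYSTEMD=true environment variable (visible in the stdout header) rather than the --force-systemd flag. A side-by-side sketch of the two invocations as reconstructed from this report:

$ out/minikube-darwin-arm64 start -p force-systemd-flag-999000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2
$ MINIKUBE_FORCE_SYSTEMD=true out/minikube-darwin-arm64 start -p force-systemd-env-463000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2

Both variants then verify the result with the same ssh probe that exits 83 here, since the host never started:

$ out/minikube-darwin-arm64 -p <profile> ssh "docker info --format {{.CgroupDriver}}"   # a passing run reports: systemd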

TestErrorSpam/setup (9.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-559000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-559000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 --driver=qemu2 : exit status 80 (9.916472167s)

-- stdout --
	* [nospam-559000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-559000" primary control-plane node in "nospam-559000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-559000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-559000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-559000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-559000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-559000] minikube v1.34.0 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19679
- KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-559000" primary control-plane node in "nospam-559000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-559000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-559000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.92s)
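Note: this test fails on two counts: every stderr line above is classified as unexpected "spam", and the three kubeadm init sub-steps it greps for can never appear because the VM never boots. Assuming socket_vmnet were fixed, a clean re-run would follow the same delete-then-start sequence the harness uses elsewhere in this report:

$ out/minikube-darwin-arm64 delete -p nospam-559000
$ out/minikube-darwin-arm64 start -p nospam-559000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 --driver=qemu2

and its stdout would need to contain "Generating certificates and keys ...", "Booting up control plane ..." and "Configuring RBAC rules ..." to pass.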

TestFunctional/serial/StartWithProxy (9.89s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-968000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-968000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.816166917s)

-- stdout --
	* [functional-968000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-968000" primary control-plane node in "functional-968000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-968000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51092 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51092 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51092 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-968000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-968000] minikube v1.34.0 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19679
- KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-968000" primary control-plane node in "functional-968000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-968000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51092 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51092 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51092 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (69.518792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.89s)
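Note: StartWithProxy points minikube at a throwaway local proxy before starting and then asserts on two strings, *Found network options:* in stdout and *You appear to be using a proxy* in stderr; neither can appear when provisioning dies at the socket_vmnet stage. A hedged reconstruction of the environment the test effectively sets up (port 51092 is the mock proxy visible in the stderr above):

$ HTTP_PROXY=localhost:51092 out/minikube-darwin-arm64 start -p functional-968000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2

The repeated "Local proxy ignored" warnings at least confirm the variable was seen: minikube deliberately declines to pass a localhost proxy into the docker env, which is why a warning appears instead of proxy configuration.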

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
I0920 10:33:05.555620    7279 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-968000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-968000 --alsologtostderr -v=8: exit status 80 (5.177632333s)

-- stdout --
	* [functional-968000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-968000" primary control-plane node in "functional-968000" cluster
	* Restarting existing qemu2 VM for "functional-968000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-968000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:33:05.585848    7500 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:33:05.585973    7500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:33:05.585976    7500 out.go:358] Setting ErrFile to fd 2...
	I0920 10:33:05.585978    7500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:33:05.586105    7500 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:33:05.587125    7500 out.go:352] Setting JSON to false
	I0920 10:33:05.603203    7500 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5556,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:33:05.603274    7500 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:33:05.607356    7500 out.go:177] * [functional-968000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:33:05.613054    7500 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:33:05.613107    7500 notify.go:220] Checking for updates...
	I0920 10:33:05.620006    7500 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:33:05.624032    7500 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:33:05.625531    7500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:33:05.628959    7500 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:33:05.632047    7500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:33:05.635328    7500 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:33:05.635383    7500 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:33:05.640005    7500 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:33:05.647018    7500 start.go:297] selected driver: qemu2
	I0920 10:33:05.647027    7500 start.go:901] validating driver "qemu2" against &{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:33:05.647097    7500 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:33:05.649213    7500 cni.go:84] Creating CNI manager for ""
	I0920 10:33:05.649255    7500 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:33:05.649302    7500 start.go:340] cluster config:
	{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:33:05.652610    7500 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:33:05.659979    7500 out.go:177] * Starting "functional-968000" primary control-plane node in "functional-968000" cluster
	I0920 10:33:05.663979    7500 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:33:05.663996    7500 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:33:05.664002    7500 cache.go:56] Caching tarball of preloaded images
	I0920 10:33:05.664053    7500 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:33:05.664058    7500 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:33:05.664115    7500 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/functional-968000/config.json ...
	I0920 10:33:05.664593    7500 start.go:360] acquireMachinesLock for functional-968000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:33:05.664622    7500 start.go:364] duration metric: took 22.167µs to acquireMachinesLock for "functional-968000"
	I0920 10:33:05.664631    7500 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:33:05.664635    7500 fix.go:54] fixHost starting: 
	I0920 10:33:05.664761    7500 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
	W0920 10:33:05.664770    7500 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:33:05.671017    7500 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
	I0920 10:33:05.674977    7500 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:33:05.675013    7500 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:34:9e:53:2f:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/disk.qcow2
	I0920 10:33:05.677001    7500 main.go:141] libmachine: STDOUT: 
	I0920 10:33:05.677018    7500 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:33:05.677051    7500 fix.go:56] duration metric: took 12.413416ms for fixHost
	I0920 10:33:05.677056    7500 start.go:83] releasing machines lock for "functional-968000", held for 12.43025ms
	W0920 10:33:05.677063    7500 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:33:05.677104    7500 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:33:05.677109    7500 start.go:729] Will try again in 5 seconds ...
	I0920 10:33:10.679235    7500 start.go:360] acquireMachinesLock for functional-968000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:33:10.679590    7500 start.go:364] duration metric: took 238.292µs to acquireMachinesLock for "functional-968000"
	I0920 10:33:10.679713    7500 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:33:10.679732    7500 fix.go:54] fixHost starting: 
	I0920 10:33:10.680399    7500 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
	W0920 10:33:10.680425    7500 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:33:10.684779    7500 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
	I0920 10:33:10.688598    7500 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:33:10.688857    7500 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:34:9e:53:2f:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/disk.qcow2
	I0920 10:33:10.697496    7500 main.go:141] libmachine: STDOUT: 
	I0920 10:33:10.697547    7500 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:33:10.697616    7500 fix.go:56] duration metric: took 17.882875ms for fixHost
	I0920 10:33:10.697632    7500 start.go:83] releasing machines lock for "functional-968000", held for 18.014667ms
	W0920 10:33:10.697814    7500 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:33:10.704821    7500 out.go:201] 
	W0920 10:33:10.708786    7500 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:33:10.708815    7500 out.go:270] * 
	* 
	W0920 10:33:10.711509    7500 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:33:10.719695    7500 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-968000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.179464625s for "functional-968000" cluster.
I0920 10:33:10.735255    7279 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (68.03725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
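
Every failure in this run traces back to the same root cause: the qemu2 driver cannot reach the socket_vmnet unix socket on the build host, so each VM restart is refused before provisioning begins. A minimal standalone Go probe (an illustration, not part of the test suite) reproduces the refusal by dialing the SocketVMnetPath recorded in the cluster config above:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// SocketVMnetPath from the cluster config dumped earlier in this report.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// With no socket_vmnet daemon listening this prints, e.g.:
		//   dial unix /var/run/socket_vmnet: connect: connection refused
		// (or "no such file or directory" if the socket file is absent).
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the same way on the agent, the daemon needs to be restarted on the host before any of the qemu2-backed tests can pass.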

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (32.759417ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-968000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (30.805625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
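
The post-mortem helpers render host state through a Go template passed via status --format={{.Host}}. A self-contained sketch of that mechanism (the Status struct here is an illustrative stand-in, not minikube's internal type):

package main

import (
	"os"
	"text/template"
)

// Status is an illustrative stand-in for the struct minikube renders;
// only Host matters for --format={{.Host}}.
type Status struct {
	Host string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// Prints "Stopped", matching the post-mortem output above.
	if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped"}); err != nil {
		panic(err)
	}
}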

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-968000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-968000 get po -A: exit status 1 (26.232833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-968000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-968000\n"*: args "kubectl --context functional-968000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-968000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (31.132291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)
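
Lines such as "I0920 10:33:10.735255    7279 config.go:182] ..." follow the glog header layout documented in the "Last Start" section later in this report ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). A small sketch for pulling those fields apart when post-processing these logs (the regular expression is ours, not minikube's):

package main

import (
	"fmt"
	"regexp"
)

// logLine matches the glog-style header used throughout this report:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var logLine = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ :]+):(\d+)\] (.*)$`)

func main() {
	sample := `I0920 10:33:10.735255    7279 config.go:182] Loaded profile config "functional-968000"`
	if m := logLine.FindStringSubmatch(sample); m != nil {
		fmt.Printf("severity=%s date=%s-%s time=%s thread=%s source=%s:%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		fmt.Printf("msg=%q\n", m[8])
	}
}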

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl images: exit status 83 (43.860791ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (41.847833ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-968000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.808708ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.767416ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (2.19s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 kubectl -- --context functional-968000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 kubectl -- --context functional-968000 get pods: exit status 1 (2.1534855s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-968000
	* no server found for cluster "functional-968000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-968000 kubectl -- --context functional-968000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (32.113833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.19s)
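
Each "(dbg) Run" / "Non-zero exit" pair above is the harness shelling out and recording the command's exit status and duration. The same pattern, reduced to the standard library (the run helper is a hypothetical sketch, not the helpers_test.go implementation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// run mirrors the "(dbg) Run" / "Non-zero exit" log pattern; it is a
// sketch, not the actual test helper.
func run(name string, args ...string) {
	fmt.Printf("(dbg) Run:  %s %v\n", name, args)
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("(dbg) Non-zero exit: exit status %d (%s)\n", ee.ExitCode(), time.Since(start))
	} else if err != nil {
		fmt.Println(err) // e.g. command not found
	}
	fmt.Print(string(out))
}

func main() {
	// The same invocation that failed above.
	run("kubectl", "config", "current-context")
}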

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.05s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-968000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-968000 get pods: exit status 1 (1.014559125s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-968000
	* no server found for cluster "functional-968000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-968000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (30.802625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.05s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-968000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-968000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.189202958s)

-- stdout --
	* [functional-968000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-968000" primary control-plane node in "functional-968000" cluster
	* Restarting existing qemu2 VM for "functional-968000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-968000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-968000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.189837875s for "functional-968000" cluster.
I0920 10:33:23.150372    7279 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (69.939291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
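
--extra-config values have the shape component.key=value; the "Last Start" log later in this report shows the flag above landing in the cluster config as ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}]. A sketch of that decomposition (our parser, not minikube's):

package main

import (
	"fmt"
	"strings"
)

// ExtraOption mirrors the Component/Key/Value triple visible in the
// cluster-config dumps in this report.
type ExtraOption struct {
	Component, Key, Value string
}

// parseExtraConfig splits "component.key=value"; keys with embedded dots
// are out of scope for this sketch.
func parseExtraConfig(s string) (ExtraOption, error) {
	kv := strings.SplitN(s, "=", 2)
	if len(kv) != 2 {
		return ExtraOption{}, fmt.Errorf("missing '=' in %q", s)
	}
	ck := strings.SplitN(kv[0], ".", 2)
	if len(ck) != 2 {
		return ExtraOption{}, fmt.Errorf("missing component prefix in %q", s)
	}
	return ExtraOption{Component: ck[0], Key: ck[1], Value: kv[1]}, nil
}

func main() {
	opt, _ := parseExtraConfig("apiserver.enable-admission-plugins=NamespaceAutoProvision")
	fmt.Printf("%+v\n", opt)
	// {Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}
}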

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-968000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-968000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.037583ms)

** stderr ** 
	error: context "functional-968000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-968000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (31.078459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 logs: exit status 83 (77.463875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-134000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | -p download-only-134000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	| delete  | -p download-only-134000                                                  | download-only-134000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	| start   | -o=json --download-only                                                  | download-only-709000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | -p download-only-709000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	| delete  | -p download-only-709000                                                  | download-only-709000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	| delete  | -p download-only-134000                                                  | download-only-134000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	| delete  | -p download-only-709000                                                  | download-only-709000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	| start   | --download-only -p                                                       | binary-mirror-534000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | binary-mirror-534000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51059                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-534000                                                  | binary-mirror-534000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	| addons  | enable dashboard -p                                                      | addons-927000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | addons-927000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-927000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | addons-927000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-927000 --wait=true                                             | addons-927000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-927000                                                         | addons-927000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	| start   | -p nospam-559000 -n=1 --memory=2250 --wait=false                         | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-559000                                                         | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
	|         | minikube-local-cache-test:functional-968000                              |                      |         |         |                     |                     |
	| cache   | functional-968000 cache delete                                           | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
	|         | minikube-local-cache-test:functional-968000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
	| ssh     | functional-968000 ssh sudo                                               | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-968000                                                        | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-968000 ssh                                                    | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-968000 cache reload                                           | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
	| ssh     | functional-968000 ssh                                                    | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-968000 kubectl --                                             | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
	|         | --context functional-968000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 10:33:17
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 10:33:17.988775    7574 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:33:17.988912    7574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:33:17.988914    7574 out.go:358] Setting ErrFile to fd 2...
	I0920 10:33:17.988916    7574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:33:17.989032    7574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:33:17.990076    7574 out.go:352] Setting JSON to false
	I0920 10:33:18.006242    7574 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5568,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:33:18.006302    7574 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:33:18.014303    7574 out.go:177] * [functional-968000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:33:18.023291    7574 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:33:18.023326    7574 notify.go:220] Checking for updates...
	I0920 10:33:18.033237    7574 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:33:18.036290    7574 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:33:18.039288    7574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:33:18.042292    7574 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:33:18.045269    7574 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:33:18.048607    7574 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:33:18.048660    7574 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:33:18.053234    7574 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:33:18.060279    7574 start.go:297] selected driver: qemu2
	I0920 10:33:18.060282    7574 start.go:901] validating driver "qemu2" against &{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:33:18.060340    7574 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:33:18.062691    7574 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:33:18.062714    7574 cni.go:84] Creating CNI manager for ""
	I0920 10:33:18.062751    7574 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:33:18.062796    7574 start.go:340] cluster config:
	{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:33:18.066423    7574 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:33:18.074276    7574 out.go:177] * Starting "functional-968000" primary control-plane node in "functional-968000" cluster
	I0920 10:33:18.078315    7574 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:33:18.078329    7574 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:33:18.078337    7574 cache.go:56] Caching tarball of preloaded images
	I0920 10:33:18.078406    7574 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:33:18.078415    7574 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:33:18.078470    7574 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/functional-968000/config.json ...
	I0920 10:33:18.078983    7574 start.go:360] acquireMachinesLock for functional-968000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:33:18.079018    7574 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "functional-968000"
	I0920 10:33:18.079026    7574 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:33:18.079029    7574 fix.go:54] fixHost starting: 
	I0920 10:33:18.079153    7574 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
	W0920 10:33:18.079160    7574 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:33:18.086254    7574 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
	I0920 10:33:18.090222    7574 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:33:18.090256    7574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:34:9e:53:2f:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/disk.qcow2
	I0920 10:33:18.092292    7574 main.go:141] libmachine: STDOUT: 
	I0920 10:33:18.092305    7574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:33:18.092334    7574 fix.go:56] duration metric: took 13.302541ms for fixHost
	I0920 10:33:18.092338    7574 start.go:83] releasing machines lock for "functional-968000", held for 13.317583ms
	W0920 10:33:18.092344    7574 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:33:18.092369    7574 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:33:18.092374    7574 start.go:729] Will try again in 5 seconds ...
	I0920 10:33:23.094522    7574 start.go:360] acquireMachinesLock for functional-968000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:33:23.094931    7574 start.go:364] duration metric: took 355.167µs to acquireMachinesLock for "functional-968000"
	I0920 10:33:23.095075    7574 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:33:23.095088    7574 fix.go:54] fixHost starting: 
	I0920 10:33:23.095797    7574 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
	W0920 10:33:23.095813    7574 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:33:23.100245    7574 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
	I0920 10:33:23.104210    7574 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:33:23.104428    7574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:34:9e:53:2f:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/disk.qcow2
	I0920 10:33:23.113285    7574 main.go:141] libmachine: STDOUT: 
	I0920 10:33:23.113329    7574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:33:23.113395    7574 fix.go:56] duration metric: took 18.311042ms for fixHost
	I0920 10:33:23.113409    7574 start.go:83] releasing machines lock for "functional-968000", held for 18.446ms
	W0920 10:33:23.113590    7574 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:33:23.121144    7574 out.go:201] 
	W0920 10:33:23.125234    7574 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:33:23.125274    7574 out.go:270] * 
	W0920 10:33:23.128459    7574 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:33:23.136190    7574 out.go:201] 
	
	
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-968000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-134000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | -p download-only-134000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| delete  | -p download-only-134000                                                  | download-only-134000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| start   | -o=json --download-only                                                  | download-only-709000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | -p download-only-709000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| delete  | -p download-only-709000                                                  | download-only-709000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| delete  | -p download-only-134000                                                  | download-only-134000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| delete  | -p download-only-709000                                                  | download-only-709000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| start   | --download-only -p                                                       | binary-mirror-534000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | binary-mirror-534000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51059                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-534000                                                  | binary-mirror-534000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| addons  | enable dashboard -p                                                      | addons-927000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | addons-927000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-927000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | addons-927000                                                            |                      |         |         |                     |                     |
| start   | -p addons-927000 --wait=true                                             | addons-927000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-927000                                                         | addons-927000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| start   | -p nospam-559000 -n=1 --memory=2250 --wait=false                         | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-559000                                                         | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | minikube-local-cache-test:functional-968000                              |                      |         |         |                     |                     |
| cache   | functional-968000 cache delete                                           | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | minikube-local-cache-test:functional-968000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
| ssh     | functional-968000 ssh sudo                                               | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-968000                                                        | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-968000 ssh                                                    | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-968000 cache reload                                           | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
| ssh     | functional-968000 ssh                                                    | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-968000 kubectl --                                             | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | --context functional-968000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/09/20 10:33:17
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0920 10:33:17.988775    7574 out.go:345] Setting OutFile to fd 1 ...
I0920 10:33:17.988912    7574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:33:17.988914    7574 out.go:358] Setting ErrFile to fd 2...
I0920 10:33:17.988916    7574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:33:17.989032    7574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
I0920 10:33:17.990076    7574 out.go:352] Setting JSON to false
I0920 10:33:18.006242    7574 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5568,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0920 10:33:18.006302    7574 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0920 10:33:18.014303    7574 out.go:177] * [functional-968000] minikube v1.34.0 on Darwin 14.5 (arm64)
I0920 10:33:18.023291    7574 out.go:177]   - MINIKUBE_LOCATION=19679
I0920 10:33:18.023326    7574 notify.go:220] Checking for updates...
I0920 10:33:18.033237    7574 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
I0920 10:33:18.036290    7574 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0920 10:33:18.039288    7574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0920 10:33:18.042292    7574 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
I0920 10:33:18.045269    7574 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0920 10:33:18.048607    7574 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:33:18.048660    7574 driver.go:394] Setting default libvirt URI to qemu:///system
I0920 10:33:18.053234    7574 out.go:177] * Using the qemu2 driver based on existing profile
I0920 10:33:18.060279    7574 start.go:297] selected driver: qemu2
I0920 10:33:18.060282    7574 start.go:901] validating driver "qemu2" against &{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 10:33:18.060340    7574 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0920 10:33:18.062691    7574 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 10:33:18.062714    7574 cni.go:84] Creating CNI manager for ""
I0920 10:33:18.062751    7574 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 10:33:18.062796    7574 start.go:340] cluster config:
{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 10:33:18.066423    7574 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 10:33:18.074276    7574 out.go:177] * Starting "functional-968000" primary control-plane node in "functional-968000" cluster
I0920 10:33:18.078315    7574 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 10:33:18.078329    7574 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0920 10:33:18.078337    7574 cache.go:56] Caching tarball of preloaded images
I0920 10:33:18.078406    7574 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0920 10:33:18.078415    7574 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0920 10:33:18.078470    7574 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/functional-968000/config.json ...
I0920 10:33:18.078983    7574 start.go:360] acquireMachinesLock for functional-968000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 10:33:18.079018    7574 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "functional-968000"
I0920 10:33:18.079026    7574 start.go:96] Skipping create...Using existing machine configuration
I0920 10:33:18.079029    7574 fix.go:54] fixHost starting: 
I0920 10:33:18.079153    7574 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
W0920 10:33:18.079160    7574 fix.go:138] unexpected machine state, will restart: <nil>
I0920 10:33:18.086254    7574 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
I0920 10:33:18.090222    7574 qemu.go:418] Using hvf for hardware acceleration
I0920 10:33:18.090256    7574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:34:9e:53:2f:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/disk.qcow2
I0920 10:33:18.092292    7574 main.go:141] libmachine: STDOUT: 
I0920 10:33:18.092305    7574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0920 10:33:18.092334    7574 fix.go:56] duration metric: took 13.302541ms for fixHost
I0920 10:33:18.092338    7574 start.go:83] releasing machines lock for "functional-968000", held for 13.317583ms
W0920 10:33:18.092344    7574 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0920 10:33:18.092369    7574 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0920 10:33:18.092374    7574 start.go:729] Will try again in 5 seconds ...
I0920 10:33:23.094522    7574 start.go:360] acquireMachinesLock for functional-968000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 10:33:23.094931    7574 start.go:364] duration metric: took 355.167µs to acquireMachinesLock for "functional-968000"
I0920 10:33:23.095075    7574 start.go:96] Skipping create...Using existing machine configuration
I0920 10:33:23.095088    7574 fix.go:54] fixHost starting: 
I0920 10:33:23.095797    7574 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
W0920 10:33:23.095813    7574 fix.go:138] unexpected machine state, will restart: <nil>
I0920 10:33:23.100245    7574 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
I0920 10:33:23.104210    7574 qemu.go:418] Using hvf for hardware acceleration
I0920 10:33:23.104428    7574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:34:9e:53:2f:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/disk.qcow2
I0920 10:33:23.113285    7574 main.go:141] libmachine: STDOUT: 
I0920 10:33:23.113329    7574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0920 10:33:23.113395    7574 fix.go:56] duration metric: took 18.311042ms for fixHost
I0920 10:33:23.113409    7574 start.go:83] releasing machines lock for "functional-968000", held for 18.446ms
W0920 10:33:23.113590    7574 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0920 10:33:23.121144    7574 out.go:201] 
W0920 10:33:23.125234    7574 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0920 10:33:23.125274    7574 out.go:270] * 
W0920 10:33:23.128459    7574 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0920 10:33:23.136190    7574 out.go:201] 

* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
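
Note on the failure mode above: both restart attempts launch qemu through /opt/socket_vmnet/bin/socket_vmnet_client and both die with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', i.e. the socket_vmnet daemon is not listening on the host. With no running VM, `minikube logs` bails out with exit status 83 and never emits the "Linux" marker the test greps for. Below is a minimal Go sketch of the kind of preflight probe that would surface this condition directly; the socket path comes from SocketVMnetPath in the config dump above, while the probe itself and the Homebrew hint are illustrative assumptions, not minikube code.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Probe the socket_vmnet control socket that the qemu2 driver depends on.
// If nothing is listening, fail fast with the same root cause that the
// log above only surfaces mid-provisioning.
func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath in the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		fmt.Fprintln(os.Stderr, "hint (assuming a Homebrew install): sudo brew services start socket_vmnet")
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening; the qemu2 driver should be able to attach")
}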

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3736109083/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-134000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | -p download-only-134000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| delete  | -p download-only-134000                                                  | download-only-134000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| start   | -o=json --download-only                                                  | download-only-709000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | -p download-only-709000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| delete  | -p download-only-709000                                                  | download-only-709000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| delete  | -p download-only-134000                                                  | download-only-134000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| delete  | -p download-only-709000                                                  | download-only-709000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| start   | --download-only -p                                                       | binary-mirror-534000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | binary-mirror-534000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51059                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-534000                                                  | binary-mirror-534000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| addons  | enable dashboard -p                                                      | addons-927000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | addons-927000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-927000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | addons-927000                                                            |                      |         |         |                     |                     |
| start   | -p addons-927000 --wait=true                                             | addons-927000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-927000                                                         | addons-927000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| start   | -p nospam-559000 -n=1 --memory=2250 --wait=false                         | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-559000 --log_dir                                                  | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-559000                                                         | nospam-559000        | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | minikube-local-cache-test:functional-968000                              |                      |         |         |                     |                     |
| cache   | functional-968000 cache delete                                           | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | minikube-local-cache-test:functional-968000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
| ssh     | functional-968000 ssh sudo                                               | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-968000                                                        | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-968000 ssh                                                    | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-968000 cache reload                                           | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
| ssh     | functional-968000 ssh                                                    | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT | 20 Sep 24 10:33 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-968000 kubectl --                                             | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | --context functional-968000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.34.0 | 20 Sep 24 10:33 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/09/20 10:33:17
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0920 10:33:17.988775    7574 out.go:345] Setting OutFile to fd 1 ...
I0920 10:33:17.988912    7574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:33:17.988914    7574 out.go:358] Setting ErrFile to fd 2...
I0920 10:33:17.988916    7574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:33:17.989032    7574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
I0920 10:33:17.990076    7574 out.go:352] Setting JSON to false
I0920 10:33:18.006242    7574 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5568,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0920 10:33:18.006302    7574 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0920 10:33:18.014303    7574 out.go:177] * [functional-968000] minikube v1.34.0 on Darwin 14.5 (arm64)
I0920 10:33:18.023291    7574 out.go:177]   - MINIKUBE_LOCATION=19679
I0920 10:33:18.023326    7574 notify.go:220] Checking for updates...
I0920 10:33:18.033237    7574 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
I0920 10:33:18.036290    7574 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0920 10:33:18.039288    7574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0920 10:33:18.042292    7574 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
I0920 10:33:18.045269    7574 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0920 10:33:18.048607    7574 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:33:18.048660    7574 driver.go:394] Setting default libvirt URI to qemu:///system
I0920 10:33:18.053234    7574 out.go:177] * Using the qemu2 driver based on existing profile
I0920 10:33:18.060279    7574 start.go:297] selected driver: qemu2
I0920 10:33:18.060282    7574 start.go:901] validating driver "qemu2" against &{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 10:33:18.060340    7574 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0920 10:33:18.062691    7574 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 10:33:18.062714    7574 cni.go:84] Creating CNI manager for ""
I0920 10:33:18.062751    7574 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 10:33:18.062796    7574 start.go:340] cluster config:
{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 10:33:18.066423    7574 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 10:33:18.074276    7574 out.go:177] * Starting "functional-968000" primary control-plane node in "functional-968000" cluster
I0920 10:33:18.078315    7574 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 10:33:18.078329    7574 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0920 10:33:18.078337    7574 cache.go:56] Caching tarball of preloaded images
I0920 10:33:18.078406    7574 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0920 10:33:18.078415    7574 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0920 10:33:18.078470    7574 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/functional-968000/config.json ...
I0920 10:33:18.078983    7574 start.go:360] acquireMachinesLock for functional-968000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 10:33:18.079018    7574 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "functional-968000"
I0920 10:33:18.079026    7574 start.go:96] Skipping create...Using existing machine configuration
I0920 10:33:18.079029    7574 fix.go:54] fixHost starting: 
I0920 10:33:18.079153    7574 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
W0920 10:33:18.079160    7574 fix.go:138] unexpected machine state, will restart: <nil>
I0920 10:33:18.086254    7574 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
I0920 10:33:18.090222    7574 qemu.go:418] Using hvf for hardware acceleration
I0920 10:33:18.090256    7574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:34:9e:53:2f:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/disk.qcow2
I0920 10:33:18.092292    7574 main.go:141] libmachine: STDOUT: 
I0920 10:33:18.092305    7574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0920 10:33:18.092334    7574 fix.go:56] duration metric: took 13.302541ms for fixHost
I0920 10:33:18.092338    7574 start.go:83] releasing machines lock for "functional-968000", held for 13.317583ms
W0920 10:33:18.092344    7574 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0920 10:33:18.092369    7574 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0920 10:33:18.092374    7574 start.go:729] Will try again in 5 seconds ...
I0920 10:33:23.094522    7574 start.go:360] acquireMachinesLock for functional-968000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 10:33:23.094931    7574 start.go:364] duration metric: took 355.167µs to acquireMachinesLock for "functional-968000"
I0920 10:33:23.095075    7574 start.go:96] Skipping create...Using existing machine configuration
I0920 10:33:23.095088    7574 fix.go:54] fixHost starting: 
I0920 10:33:23.095797    7574 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
W0920 10:33:23.095813    7574 fix.go:138] unexpected machine state, will restart: <nil>
I0920 10:33:23.100245    7574 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
I0920 10:33:23.104210    7574 qemu.go:418] Using hvf for hardware acceleration
I0920 10:33:23.104428    7574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:34:9e:53:2f:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/functional-968000/disk.qcow2
I0920 10:33:23.113285    7574 main.go:141] libmachine: STDOUT: 
I0920 10:33:23.113329    7574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0920 10:33:23.113395    7574 fix.go:56] duration metric: took 18.311042ms for fixHost
I0920 10:33:23.113409    7574 start.go:83] releasing machines lock for "functional-968000", held for 18.446ms
W0920 10:33:23.113590    7574 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0920 10:33:23.121144    7574 out.go:201] 
W0920 10:33:23.125234    7574 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0920 10:33:23.125274    7574 out.go:270] * 
W0920 10:33:23.128459    7574 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0920 10:33:23.136190    7574 out.go:201] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-968000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-968000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.057333ms)

** stderr ** 
	error: context "functional-968000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-968000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-968000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-968000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-968000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-968000 --alsologtostderr -v=1] stderr:
I0920 10:34:04.265271    7885 out.go:345] Setting OutFile to fd 1 ...
I0920 10:34:04.265685    7885 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:34:04.265692    7885 out.go:358] Setting ErrFile to fd 2...
I0920 10:34:04.265694    7885 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:34:04.265835    7885 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
I0920 10:34:04.266078    7885 mustload.go:65] Loading cluster: functional-968000
I0920 10:34:04.266293    7885 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:34:04.270588    7885 out.go:177] * The control-plane node functional-968000 host is not running: state=Stopped
I0920 10:34:04.274509    7885 out.go:177]   To start a cluster, run: "minikube start -p functional-968000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (42.678125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 status: exit status 7 (30.413708ms)

-- stdout --
	functional-968000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-968000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (30.674042ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-968000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 status -o json: exit status 7 (30.687292ms)

-- stdout --
	{"Name":"functional-968000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-968000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (30.630333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)

TestFunctional/parallel/ServiceCmdConnect (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-968000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-968000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.800416ms)

** stderr ** 
	error: context "functional-968000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-968000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-968000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-968000 describe po hello-node-connect: exit status 1 (25.475833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:1604: "kubectl --context functional-968000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-968000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-968000 logs -l app=hello-node-connect: exit status 1 (25.755625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:1610: "kubectl --context functional-968000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-968000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-968000 describe svc hello-node-connect: exit status 1 (25.565125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:1616: "kubectl --context functional-968000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (30.44425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-968000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (30.845083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "echo hello": exit status 83 (49.48025ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n"*. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "cat /etc/hostname": exit status 83 (42.989208ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-968000"- but got *"* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n"*. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (31.182459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (51.41425ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-968000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.963ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-968000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-968000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cp functional-968000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2156174527/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 cp functional-968000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2156174527/001/cp-test.txt: exit status 83 (41.515167ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-968000 cp functional-968000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2156174527/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 "sudo cat /home/docker/cp-test.txt": exit status 83 (39.799291ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2156174527/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (48.925417ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-968000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (41.859166ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-968000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-968000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.27s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7279/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/test/nested/copy/7279/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/test/nested/copy/7279/hosts": exit status 83 (40.545625ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/test/nested/copy/7279/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-968000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-968000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (30.389ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7279.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/7279.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/7279.pem": exit status 83 (42.27575ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/7279.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo cat /etc/ssl/certs/7279.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7279.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-968000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-968000"
  	"""
  )
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7279.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /usr/share/ca-certificates/7279.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /usr/share/ca-certificates/7279.pem": exit status 83 (38.728542ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/7279.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo cat /usr/share/ca-certificates/7279.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7279.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-968000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-968000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (42.696292ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-968000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-968000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/72792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/72792.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/72792.pem": exit status 83 (49.661ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/72792.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo cat /etc/ssl/certs/72792.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/72792.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-968000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-968000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/72792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /usr/share/ca-certificates/72792.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /usr/share/ca-certificates/72792.pem": exit status 83 (41.797459ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/72792.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo cat /usr/share/ca-certificates/72792.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/72792.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-968000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-968000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (40.667083ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-968000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-968000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (29.027125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-968000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-968000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.686042ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-968000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (29.886375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo systemctl is-active crio": exit status 83 (39.040792ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 version -o=json --components: exit status 83 (42.895625ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-968000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-968000 image ls --format short --alsologtostderr:
I0920 10:34:04.670542    7900 out.go:345] Setting OutFile to fd 1 ...
I0920 10:34:04.670708    7900 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:34:04.670711    7900 out.go:358] Setting ErrFile to fd 2...
I0920 10:34:04.670713    7900 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:34:04.670851    7900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
I0920 10:34:04.671276    7900 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:34:04.671342    7900 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-968000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-968000 image ls --format table --alsologtostderr:
I0920 10:34:04.891919    7912 out.go:345] Setting OutFile to fd 1 ...
I0920 10:34:04.892071    7912 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:34:04.892074    7912 out.go:358] Setting ErrFile to fd 2...
I0920 10:34:04.892076    7912 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:34:04.892226    7912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
I0920 10:34:04.892657    7912 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:34:04.892725    7912 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
I0920 10:34:13.587089    7279 retry.go:31] will retry after 33.59007984s: Temporary Error: Get "http:": http: no Host in request URL
I0920 10:34:47.179132    7279 retry.go:31] will retry after 26.653165856s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-968000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-968000 image ls --format json --alsologtostderr:
I0920 10:34:04.855687    7910 out.go:345] Setting OutFile to fd 1 ...
I0920 10:34:04.855848    7910 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:34:04.855852    7910 out.go:358] Setting ErrFile to fd 2...
I0920 10:34:04.855854    7910 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:34:04.855979    7910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
I0920 10:34:04.856434    7910 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:34:04.856496    7910 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-968000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-968000 image ls --format yaml --alsologtostderr:
I0920 10:34:04.706046    7902 out.go:345] Setting OutFile to fd 1 ...
I0920 10:34:04.706208    7902 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:34:04.706212    7902 out.go:358] Setting ErrFile to fd 2...
I0920 10:34:04.706214    7902 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:34:04.706354    7902 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
I0920 10:34:04.706860    7902 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:34:04.706924    7902 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh pgrep buildkitd: exit status 83 (40.742542ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image build -t localhost/my-image:functional-968000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-968000 image build -t localhost/my-image:functional-968000 testdata/build --alsologtostderr:
I0920 10:34:04.782213    7906 out.go:345] Setting OutFile to fd 1 ...
I0920 10:34:04.782671    7906 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:34:04.782675    7906 out.go:358] Setting ErrFile to fd 2...
I0920 10:34:04.782677    7906 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:34:04.782847    7906 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
I0920 10:34:04.783230    7906 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:34:04.783678    7906 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:34:04.783955    7906 build_images.go:133] succeeded building to: 
I0920 10:34:04.783958    7906 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls
functional_test.go:446: expected "localhost/my-image:functional-968000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-968000 docker-env) && out/minikube-darwin-arm64 status -p functional-968000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-968000 docker-env) && out/minikube-darwin-arm64 status -p functional-968000": exit status 1 (44.768041ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2: exit status 83 (41.812458ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
** stderr ** 
	I0920 10:34:04.543735    7894 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:34:04.544331    7894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:34:04.544334    7894 out.go:358] Setting ErrFile to fd 2...
	I0920 10:34:04.544337    7894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:34:04.544473    7894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:34:04.544689    7894 mustload.go:65] Loading cluster: functional-968000
	I0920 10:34:04.544912    7894 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:34:04.548014    7894 out.go:177] * The control-plane node functional-968000 host is not running: state=Stopped
	I0920 10:34:04.551982    7894 out.go:177]   To start a cluster, run: "minikube start -p functional-968000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2: exit status 83 (42.557334ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
** stderr ** 
	I0920 10:34:04.628374    7898 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:34:04.628507    7898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:34:04.628510    7898 out.go:358] Setting ErrFile to fd 2...
	I0920 10:34:04.628512    7898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:34:04.628651    7898 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:34:04.628887    7898 mustload.go:65] Loading cluster: functional-968000
	I0920 10:34:04.629090    7898 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:34:04.632973    7898 out.go:177] * The control-plane node functional-968000 host is not running: state=Stopped
	I0920 10:34:04.637004    7898 out.go:177]   To start a cluster, run: "minikube start -p functional-968000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2: exit status 83 (41.597333ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
** stderr ** 
	I0920 10:34:04.586231    7896 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:34:04.586382    7896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:34:04.586385    7896 out.go:358] Setting ErrFile to fd 2...
	I0920 10:34:04.586387    7896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:34:04.586507    7896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:34:04.586738    7896 mustload.go:65] Loading cluster: functional-968000
	I0920 10:34:04.586937    7896 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:34:04.590073    7896 out.go:177] * The control-plane node functional-968000 host is not running: state=Stopped
	I0920 10:34:04.594014    7896 out.go:177]   To start a cluster, run: "minikube start -p functional-968000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-968000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-968000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.684334ms)

** stderr ** 
	error: context "functional-968000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-968000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 service list: exit status 83 (41.958542ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-968000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 service list -o json: exit status 83 (43.958875ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-968000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 service --namespace=default --https --url hello-node: exit status 83 (44.8325ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-968000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 service hello-node --url --format={{.IP}}: exit status 83 (44.831542ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-968000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 service hello-node --url: exit status 83 (43.854ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-968000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:1569: failed to parse "* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"": parse "* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0920 10:33:24.943009    7691 out.go:345] Setting OutFile to fd 1 ...
I0920 10:33:24.943191    7691 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:33:24.943194    7691 out.go:358] Setting ErrFile to fd 2...
I0920 10:33:24.943196    7691 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:33:24.943328    7691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
I0920 10:33:24.943557    7691 mustload.go:65] Loading cluster: functional-968000
I0920 10:33:24.943776    7691 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:33:24.948347    7691 out.go:177] * The control-plane node functional-968000 host is not running: state=Stopped
I0920 10:33:24.959349    7691 out.go:177]   To start a cluster, run: "minikube start -p functional-968000"

stdout: * The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7692: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-968000": client config: context "functional-968000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (108.91s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0920 10:33:25.009564    7279 retry.go:31] will retry after 3.153776941s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-968000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-968000 get svc nginx-svc: exit status 1 (69.11075ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-968000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (108.91s)
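
The "no Host in request URL" error above is a direct consequence of the stopped cluster: the test never obtains a service IP, so it requests the literal URL "http://", which Go's HTTP client rejects before any network I/O. A minimal sketch (not from the suite) showing the identical error:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // With no service IP substituted in, the request URL collapses to
        // "http://", which the client refuses up front.
        _, err := http.Get("http://")
        fmt.Println(err) // Get "http:": http: no Host in request URL
    }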

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image load --daemon kicbase/echo-server:functional-968000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-968000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image load --daemon kicbase/echo-server:functional-968000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-968000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-968000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image load --daemon kicbase/echo-server:functional-968000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-968000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image save kicbase/echo-server:functional-968000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-968000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0920 10:35:13.950170    7279 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.03570325s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
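
The scutil dump explains what the test expected: resolver #8 scopes cluster.local to nameserver 10.96.0.10, the address "minikube tunnel" would normally route into the cluster. With no VM behind the tunnel, nothing answers there and the query times out. A rough Go equivalent of the dig invocation (a sketch, assuming the same 10.96.0.10:53 endpoint):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Pin all lookups to the cluster DNS service address, ignoring the
        // system resolver configuration, just as dig @10.96.0.10 does.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
        fmt.Println(addrs, err) // times out while the tunnel is down
    }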

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0920 10:35:39.086485    7279 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:35:49.089624    7279 retry.go:31] will retry after 2.69109269s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0920 10:36:01.785680    7279 retry.go:31] will retry after 5.864955351s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:64687->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

TestMultiControlPlane/serial/StartCluster (9.88s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-279000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-279000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.813239166s)

-- stdout --
	* [ha-279000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-279000" primary control-plane node in "ha-279000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-279000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:36:09.489521    7951 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:36:09.489656    7951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:36:09.489659    7951 out.go:358] Setting ErrFile to fd 2...
	I0920 10:36:09.489662    7951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:36:09.489790    7951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:36:09.490892    7951 out.go:352] Setting JSON to false
	I0920 10:36:09.506801    7951 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5740,"bootTime":1726848029,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:36:09.506873    7951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:36:09.513509    7951 out.go:177] * [ha-279000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:36:09.519335    7951 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:36:09.519385    7951 notify.go:220] Checking for updates...
	I0920 10:36:09.525251    7951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:36:09.528328    7951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:36:09.531317    7951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:36:09.534305    7951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:36:09.537384    7951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:36:09.540552    7951 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:36:09.543313    7951 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:36:09.550310    7951 start.go:297] selected driver: qemu2
	I0920 10:36:09.550316    7951 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:36:09.550322    7951 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:36:09.552556    7951 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:36:09.553992    7951 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:36:09.557471    7951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:36:09.557500    7951 cni.go:84] Creating CNI manager for ""
	I0920 10:36:09.557520    7951 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 10:36:09.557524    7951 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 10:36:09.557572    7951 start.go:340] cluster config:
	{Name:ha-279000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:36:09.561331    7951 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:36:09.568283    7951 out.go:177] * Starting "ha-279000" primary control-plane node in "ha-279000" cluster
	I0920 10:36:09.572306    7951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:36:09.572320    7951 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:36:09.572327    7951 cache.go:56] Caching tarball of preloaded images
	I0920 10:36:09.572385    7951 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:36:09.572391    7951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:36:09.572589    7951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/ha-279000/config.json ...
	I0920 10:36:09.572601    7951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/ha-279000/config.json: {Name:mk03d3e621bbd4ee8c1422e228f8be95406174de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:36:09.572804    7951 start.go:360] acquireMachinesLock for ha-279000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:36:09.572840    7951 start.go:364] duration metric: took 29.208µs to acquireMachinesLock for "ha-279000"
	I0920 10:36:09.572852    7951 start.go:93] Provisioning new machine with config: &{Name:ha-279000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:36:09.572878    7951 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:36:09.581291    7951 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:36:09.599020    7951 start.go:159] libmachine.API.Create for "ha-279000" (driver="qemu2")
	I0920 10:36:09.599049    7951 client.go:168] LocalClient.Create starting
	I0920 10:36:09.599107    7951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:36:09.599141    7951 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:09.599149    7951 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:09.599187    7951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:36:09.599212    7951 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:09.599221    7951 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:09.599586    7951 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:36:09.766453    7951 main.go:141] libmachine: Creating SSH key...
	I0920 10:36:09.824931    7951 main.go:141] libmachine: Creating Disk image...
	I0920 10:36:09.824937    7951 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:36:09.825131    7951 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2
	I0920 10:36:09.834303    7951 main.go:141] libmachine: STDOUT: 
	I0920 10:36:09.834319    7951 main.go:141] libmachine: STDERR: 
	I0920 10:36:09.834386    7951 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2 +20000M
	I0920 10:36:09.842227    7951 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:36:09.842252    7951 main.go:141] libmachine: STDERR: 
	I0920 10:36:09.842272    7951 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2
	I0920 10:36:09.842278    7951 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:36:09.842290    7951 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:36:09.842315    7951 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:be:13:8a:8b:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2
	I0920 10:36:09.843914    7951 main.go:141] libmachine: STDOUT: 
	I0920 10:36:09.843926    7951 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:36:09.843947    7951 client.go:171] duration metric: took 244.88775ms to LocalClient.Create
	I0920 10:36:11.846127    7951 start.go:128] duration metric: took 2.27320525s to createHost
	I0920 10:36:11.846213    7951 start.go:83] releasing machines lock for "ha-279000", held for 2.273338375s
	W0920 10:36:11.846311    7951 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:11.863559    7951 out.go:177] * Deleting "ha-279000" in qemu2 ...
	W0920 10:36:11.896742    7951 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:11.896766    7951 start.go:729] Will try again in 5 seconds ...
	I0920 10:36:16.898679    7951 start.go:360] acquireMachinesLock for ha-279000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:36:16.899132    7951 start.go:364] duration metric: took 328.292µs to acquireMachinesLock for "ha-279000"
	I0920 10:36:16.899240    7951 start.go:93] Provisioning new machine with config: &{Name:ha-279000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:36:16.899522    7951 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:36:16.918269    7951 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:36:16.969945    7951 start.go:159] libmachine.API.Create for "ha-279000" (driver="qemu2")
	I0920 10:36:16.969987    7951 client.go:168] LocalClient.Create starting
	I0920 10:36:16.970104    7951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:36:16.970169    7951 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:16.970190    7951 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:16.970247    7951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:36:16.970290    7951 main.go:141] libmachine: Decoding PEM data...
	I0920 10:36:16.970305    7951 main.go:141] libmachine: Parsing certificate...
	I0920 10:36:16.970831    7951 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:36:17.146473    7951 main.go:141] libmachine: Creating SSH key...
	I0920 10:36:17.207099    7951 main.go:141] libmachine: Creating Disk image...
	I0920 10:36:17.207105    7951 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:36:17.207297    7951 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2
	I0920 10:36:17.216391    7951 main.go:141] libmachine: STDOUT: 
	I0920 10:36:17.216412    7951 main.go:141] libmachine: STDERR: 
	I0920 10:36:17.216462    7951 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2 +20000M
	I0920 10:36:17.224305    7951 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:36:17.224326    7951 main.go:141] libmachine: STDERR: 
	I0920 10:36:17.224350    7951 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2
	I0920 10:36:17.224355    7951 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:36:17.224359    7951 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:36:17.224385    7951 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:fe:37:ad:cb:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2
	I0920 10:36:17.225975    7951 main.go:141] libmachine: STDOUT: 
	I0920 10:36:17.225990    7951 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:36:17.226003    7951 client.go:171] duration metric: took 256.008125ms to LocalClient.Create
	I0920 10:36:19.228311    7951 start.go:128] duration metric: took 2.328728541s to createHost
	I0920 10:36:19.228404    7951 start.go:83] releasing machines lock for "ha-279000", held for 2.32923625s
	W0920 10:36:19.228727    7951 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-279000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-279000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:36:19.243433    7951 out.go:201] 
	W0920 10:36:19.247672    7951 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:36:19.247705    7951 out.go:270] * 
	* 
	W0920 10:36:19.250329    7951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:36:19.259413    7951 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-279000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (69.003458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.88s)
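
Every qemu2 create attempt in this log fails on the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the VM never boots and the remaining HA tests inherit a stopped cluster. A minimal probe (a sketch, not part of the suite) for checking that precondition on the runner:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // If the daemon is not listening, this fails with "connection refused",
        // matching the STDERR lines captured above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }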

TestMultiControlPlane/serial/DeployApp (108.43s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.868834ms)

** stderr ** 
	error: cluster "ha-279000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- rollout status deployment/busybox: exit status 1 (58.086458ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.976958ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:36:19.522121    7279 retry.go:31] will retry after 537.433748ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.021ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:36:20.165901    7279 retry.go:31] will retry after 1.484130586s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.027958ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:36:21.757484    7279 retry.go:31] will retry after 1.437530651s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.824584ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:36:23.301236    7279 retry.go:31] will retry after 4.209441373s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.724166ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:36:27.619842    7279 retry.go:31] will retry after 5.743217876s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.147083ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:36:33.469554    7279 retry.go:31] will retry after 5.026648476s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.035584ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:36:38.603538    7279 retry.go:31] will retry after 14.93177574s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.4675ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:36:53.641303    7279 retry.go:31] will retry after 9.557256495s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.642083ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:37:03.305560    7279 retry.go:31] will retry after 31.934170724s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.207542ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:37:35.345317    7279 retry.go:31] will retry after 32.055802579s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.766ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.064292ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.214792ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.578834ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.032083ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (30.022375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (108.43s)
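
The retry.go lines above show the harness re-running kubectl with a growing, randomized delay for well over a minute before giving up; since the cluster was never created, every attempt fails identically. A hypothetical stand-in for that retry-with-backoff pattern (the helper name and delays are illustrative, not minikube's actual retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff reruns fn until it succeeds or attempts are exhausted,
    // doubling a jittered delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := time.Duration(1<<i)*base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        err := retryWithBackoff(3, 500*time.Millisecond, func() error {
            return errors.New(`no server found for cluster "ha-279000"`)
        })
        fmt.Println("giving up:", err)
    }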

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-279000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.178833ms)

** stderr ** 
	error: no server found for cluster "ha-279000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (30.751459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-279000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-279000 -v=7 --alsologtostderr: exit status 83 (45.148167ms)

-- stdout --
	* The control-plane node ha-279000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-279000"

-- /stdout --
** stderr ** 
	I0920 10:38:07.888173    8037 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:07.888628    8037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:07.888632    8037 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:07.888634    8037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:07.888752    8037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:07.888967    8037 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:07.889169    8037 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:07.892412    8037 out.go:177] * The control-plane node ha-279000 host is not running: state=Stopped
	I0920 10:38:07.899371    8037 out.go:177]   To start a cluster, run: "minikube start -p ha-279000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-279000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (31.005834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-279000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-279000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.316666ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-279000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-279000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-279000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (30.150917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
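
Note: two distinct failures stack here. kubectl exits 1 because the ha-279000 context was never written to the kubeconfig, so the test receives empty stdout; feeding that empty string to encoding/json is what produces the second error at ha_test.go:264. A minimal reproduction of the decode step (the target type here is illustrative, not the test's actual type):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl printed nothing (the context lookup failed before any
	// output), so the label-list decode effectively ran on "":
	var labels []map[string]string // illustrative target type
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}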

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-279000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-279000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-279000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-279000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-279000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-279000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-279000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-279000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (30.171334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
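
Note: the two assertions above read only three things out of the profile list --output json dump: the profile Name, its Status, and the length of Config.Nodes. A sketch of that decoding with throwaway structs limited to those fields (field names copied from the dump quoted above; every other key is silently dropped by encoding/json):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields the assertions inspect.
type profileList struct {
	Valid []struct {
		Name   string
		Status string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// This run: status=Starting nodes=1, hence both assertion failures.
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}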

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status --output json -v=7 --alsologtostderr: exit status 7 (30.003625ms)

                                                
                                                
-- stdout --
	{"Name":"ha-279000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:38:08.098655    8051 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:08.098812    8051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:08.098816    8051 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:08.098818    8051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:08.098930    8051 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:08.099043    8051 out.go:352] Setting JSON to true
	I0920 10:38:08.099055    8051 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:08.099109    8051 notify.go:220] Checking for updates...
	I0920 10:38:08.099247    8051 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:08.099256    8051 status.go:174] checking status of ha-279000 ...
	I0920 10:38:08.099518    8051 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:38:08.099523    8051 status.go:377] host is not running, skipping remaining checks
	I0920 10:38:08.099525    8051 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-279000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (30.82925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
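
Note: the unmarshal error at ha_test.go:333 is a shape mismatch: with a single node, status --output json prints one bare object (the stdout above), while the test decodes into []cluster.Status. A tolerant decoder sketch that accepts both shapes; the nodeStatus struct is a stand-in for cluster.Status, with fields copied from the stdout above.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

type nodeStatus struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatus accepts both a bare object (single-node profile, as in
// this run) and an array (multi-node profile).
func decodeStatus(out []byte) ([]nodeStatus, error) {
	out = bytes.TrimSpace(out)
	if len(out) > 0 && out[0] == '[' {
		var many []nodeStatus
		return many, json.Unmarshal(out, &many)
	}
	var one nodeStatus
	if err := json.Unmarshal(out, &one); err != nil {
		return nil, err
	}
	return []nodeStatus{one}, nil
}

func main() {
	// The exact stdout from the failing run above:
	raw := `{"Name":"ha-279000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
	sts, err := decodeStatus([]byte(raw))
	fmt.Println(sts, err) // one-element slice, nil error
}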

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 node stop m02 -v=7 --alsologtostderr: exit status 85 (45.847625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:38:08.160518    8055 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:08.161094    8055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:08.161097    8055 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:08.161100    8055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:08.161281    8055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:08.161540    8055 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:08.161750    8055 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:08.164980    8055 out.go:201] 
	W0920 10:38:08.167991    8055 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0920 10:38:08.167997    8055 out.go:270] * 
	* 
	W0920 10:38:08.169940    8055 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:38:08.173988    8055 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-279000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr: exit status 7 (30.390666ms)

                                                
                                                
-- stdout --
	ha-279000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:38:08.206491    8057 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:08.206643    8057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:08.206646    8057 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:08.206648    8057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:08.206765    8057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:08.206899    8057 out.go:352] Setting JSON to false
	I0920 10:38:08.206909    8057 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:08.206975    8057 notify.go:220] Checking for updates...
	I0920 10:38:08.207138    8057 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:08.207151    8057 status.go:174] checking status of ha-279000 ...
	I0920 10:38:08.207395    8057 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:38:08.207399    8057 status.go:377] host is not running, skipping remaining checks
	I0920 10:38:08.207401    8057 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr": ha-279000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr": ha-279000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr": ha-279000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr": ha-279000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (30.431667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
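
Note: exit status 85 (GUEST_NODE_RETRIEVE) means m02 simply does not exist in this profile; only the primary control-plane node was ever recorded. A sketch that probes node list (the same subcommand used later in this report) before acting on a named node; the skip logic is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "node stop m02" exited 85 (GUEST_NODE_RETRIEVE) because no m02
	// node was ever added to the profile. Listing nodes first turns
	// that into an explicit skip instead of an error box.
	out, err := exec.Command("out/minikube-darwin-arm64",
		"node", "list", "-p", "ha-279000").Output()
	if err != nil {
		fmt.Println("node list failed:", err)
		return
	}
	if !strings.Contains(string(out), "m02") {
		fmt.Println("node m02 not present; nothing to stop")
		return
	}
	// Safe to proceed with: out/minikube-darwin-arm64 -p ha-279000 node stop m02
}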

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-279000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-279000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-279000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-279000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (30.449041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (54.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.055958ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:38:08.346491    8066 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:08.346891    8066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:08.346895    8066 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:08.346898    8066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:08.347050    8066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:08.347321    8066 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:08.347505    8066 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:08.351994    8066 out.go:201] 
	W0920 10:38:08.356032    8066 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0920 10:38:08.356037    8066 out.go:270] * 
	* 
	W0920 10:38:08.358032    8066 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:38:08.360984    8066 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:422: I0920 10:38:08.346491    8066 out.go:345] Setting OutFile to fd 1 ...
I0920 10:38:08.346891    8066 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:38:08.346895    8066 out.go:358] Setting ErrFile to fd 2...
I0920 10:38:08.346898    8066 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:38:08.347050    8066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
I0920 10:38:08.347321    8066 mustload.go:65] Loading cluster: ha-279000
I0920 10:38:08.347505    8066 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:38:08.351994    8066 out.go:201] 
W0920 10:38:08.356032    8066 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0920 10:38:08.356037    8066 out.go:270] * 
* 
W0920 10:38:08.358032    8066 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0920 10:38:08.360984    8066 out.go:201] 

                                                
                                                
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-279000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr: exit status 7 (30.972125ms)

                                                
                                                
-- stdout --
	ha-279000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:38:08.395149    8068 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:08.395287    8068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:08.395290    8068 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:08.395293    8068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:08.395429    8068 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:08.395550    8068 out.go:352] Setting JSON to false
	I0920 10:38:08.395561    8068 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:08.395611    8068 notify.go:220] Checking for updates...
	I0920 10:38:08.395782    8068 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:08.395791    8068 status.go:174] checking status of ha-279000 ...
	I0920 10:38:08.396019    8068 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:38:08.396022    8068 status.go:377] host is not running, skipping remaining checks
	I0920 10:38:08.396025    8068 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:38:08.396914    7279 retry.go:31] will retry after 929.731524ms: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr: exit status 7 (73.572208ms)

                                                
                                                
-- stdout --
	ha-279000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:38:09.400434    8070 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:09.400610    8070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:09.400615    8070 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:09.400618    8070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:09.400788    8070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:09.400942    8070 out.go:352] Setting JSON to false
	I0920 10:38:09.400956    8070 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:09.400995    8070 notify.go:220] Checking for updates...
	I0920 10:38:09.401225    8070 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:09.401239    8070 status.go:174] checking status of ha-279000 ...
	I0920 10:38:09.401545    8070 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:38:09.401550    8070 status.go:377] host is not running, skipping remaining checks
	I0920 10:38:09.401553    8070 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:38:09.402605    7279 retry.go:31] will retry after 2.212597201s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr: exit status 7 (76.269083ms)

                                                
                                                
-- stdout --
	ha-279000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:38:11.691551    8072 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:11.691746    8072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:11.691750    8072 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:11.691753    8072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:11.691935    8072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:11.692085    8072 out.go:352] Setting JSON to false
	I0920 10:38:11.692099    8072 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:11.692179    8072 notify.go:220] Checking for updates...
	I0920 10:38:11.692401    8072 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:11.692412    8072 status.go:174] checking status of ha-279000 ...
	I0920 10:38:11.692774    8072 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:38:11.692779    8072 status.go:377] host is not running, skipping remaining checks
	I0920 10:38:11.692782    8072 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:38:11.693872    7279 retry.go:31] will retry after 1.721798816s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr: exit status 7 (75.321125ms)

                                                
                                                
-- stdout --
	ha-279000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:38:13.491044    8074 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:13.491232    8074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:13.491236    8074 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:13.491239    8074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:13.491435    8074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:13.491608    8074 out.go:352] Setting JSON to false
	I0920 10:38:13.491622    8074 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:13.491671    8074 notify.go:220] Checking for updates...
	I0920 10:38:13.491915    8074 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:13.491926    8074 status.go:174] checking status of ha-279000 ...
	I0920 10:38:13.492258    8074 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:38:13.492263    8074 status.go:377] host is not running, skipping remaining checks
	I0920 10:38:13.492266    8074 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:38:13.493338    7279 retry.go:31] will retry after 4.553659661s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr: exit status 7 (73.066541ms)

                                                
                                                
-- stdout --
	ha-279000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:38:18.120248    8076 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:18.120436    8076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:18.120440    8076 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:18.120444    8076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:18.120627    8076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:18.120792    8076 out.go:352] Setting JSON to false
	I0920 10:38:18.120806    8076 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:18.120848    8076 notify.go:220] Checking for updates...
	I0920 10:38:18.121100    8076 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:18.121114    8076 status.go:174] checking status of ha-279000 ...
	I0920 10:38:18.121435    8076 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:38:18.121440    8076 status.go:377] host is not running, skipping remaining checks
	I0920 10:38:18.121442    8076 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:38:18.122490    7279 retry.go:31] will retry after 7.103706074s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr: exit status 7 (76.324167ms)

                                                
                                                
-- stdout --
	ha-279000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:38:25.301397    8078 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:25.301585    8078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:25.301589    8078 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:25.301592    8078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:25.301786    8078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:25.301961    8078 out.go:352] Setting JSON to false
	I0920 10:38:25.301976    8078 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:25.302007    8078 notify.go:220] Checking for updates...
	I0920 10:38:25.302270    8078 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:25.302285    8078 status.go:174] checking status of ha-279000 ...
	I0920 10:38:25.302593    8078 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:38:25.302598    8078 status.go:377] host is not running, skipping remaining checks
	I0920 10:38:25.302601    8078 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:38:25.303656    7279 retry.go:31] will retry after 11.125739603s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr: exit status 7 (74.196167ms)

                                                
                                                
-- stdout --
	ha-279000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:38:36.503692    8080 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:36.503891    8080 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:36.503897    8080 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:36.503901    8080 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:36.504056    8080 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:36.504208    8080 out.go:352] Setting JSON to false
	I0920 10:38:36.504224    8080 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:36.504277    8080 notify.go:220] Checking for updates...
	I0920 10:38:36.504495    8080 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:36.504508    8080 status.go:174] checking status of ha-279000 ...
	I0920 10:38:36.504828    8080 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:38:36.504833    8080 status.go:377] host is not running, skipping remaining checks
	I0920 10:38:36.504836    8080 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:38:36.505881    7279 retry.go:31] will retry after 11.747991138s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr: exit status 7 (74.881667ms)

                                                
                                                
-- stdout --
	ha-279000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:38:48.328991    8083 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:38:48.329179    8083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:48.329183    8083 out.go:358] Setting ErrFile to fd 2...
	I0920 10:38:48.329187    8083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:38:48.329344    8083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:38:48.329524    8083 out.go:352] Setting JSON to false
	I0920 10:38:48.329537    8083 mustload.go:65] Loading cluster: ha-279000
	I0920 10:38:48.329572    8083 notify.go:220] Checking for updates...
	I0920 10:38:48.329819    8083 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:38:48.329829    8083 status.go:174] checking status of ha-279000 ...
	I0920 10:38:48.330141    8083 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:38:48.330146    8083 status.go:377] host is not running, skipping remaining checks
	I0920 10:38:48.330149    8083 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0920 10:38:48.331215    7279 retry.go:31] will retry after 14.045557933s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr: exit status 7 (74.622583ms)

                                                
                                                
-- stdout --
	ha-279000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:39:02.451521    8090 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:39:02.451719    8090 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:02.451723    8090 out.go:358] Setting ErrFile to fd 2...
	I0920 10:39:02.451726    8090 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:02.451887    8090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:39:02.452062    8090 out.go:352] Setting JSON to false
	I0920 10:39:02.452082    8090 mustload.go:65] Loading cluster: ha-279000
	I0920 10:39:02.452121    8090 notify.go:220] Checking for updates...
	I0920 10:39:02.452344    8090 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:39:02.452356    8090 status.go:174] checking status of ha-279000 ...
	I0920 10:39:02.452688    8090 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:39:02.452693    8090 status.go:377] host is not running, skipping remaining checks
	I0920 10:39:02.452696    8090 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (34.215917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (54.17s)
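
Note: the "will retry after ..." lines come from a jittered backoff loop (retry.go:31): delays grow roughly geometrically but are randomized, which is why the 2.21s wait is followed by a shorter 1.72s one. An illustrative sketch of that pattern, under the assumption of a doubling base delay with random jitter; it is not minikube's actual implementation.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs f with a jittered, growing delay until it succeeds or
// the time budget is exhausted.
func retry(budget time.Duration, f func() error) error {
	deadline := time.Now().Add(budget)
	delay := time.Second
	for {
		err := f()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Randomizing around the base delay is what makes the observed
		// waits non-monotonic.
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		if delay < 16*time.Second {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	_ = retry(30*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("exit status 7") // stand-in for the status check above
		}
		return nil
	})
}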

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-279000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-279000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-279000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-279000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-279000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-279000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-279000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-279000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
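Both assertions above read the same "profile list --output json" payload: ha_test.go:304 counts the entries under Config.Nodes (expecting 4, finding 1), and ha_test.go:307 reads the top-level Status field (expecting "HAppy", finding "Starting"). A minimal Go sketch of that check, assuming only the JSON shape visible in the failure message, with the struct fields trimmed to the ones used:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
            Config struct {
                Nodes []struct {
                    ControlPlane bool `json:"ControlPlane"`
                    Worker       bool `json:"Worker"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("unexpected json:", err)
            return
        }
        for _, p := range pl.Valid {
            // ha_test.go:304 wants len(Config.Nodes) == 4; ha_test.go:307 wants Status == "HAppy".
            fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
        }
    }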
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (31.117625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.24s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-279000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-279000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-279000 -v=7 --alsologtostderr: (2.877297917s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-279000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-279000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.226445s)

-- stdout --
	* [ha-279000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-279000" primary control-plane node in "ha-279000" cluster
	* Restarting existing qemu2 VM for "ha-279000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-279000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:39:05.544168    8119 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:39:05.544342    8119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:05.544347    8119 out.go:358] Setting ErrFile to fd 2...
	I0920 10:39:05.544350    8119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:05.544534    8119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:39:05.545759    8119 out.go:352] Setting JSON to false
	I0920 10:39:05.564933    8119 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5916,"bootTime":1726848029,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:39:05.564996    8119 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:39:05.568729    8119 out.go:177] * [ha-279000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:39:05.575699    8119 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:39:05.575765    8119 notify.go:220] Checking for updates...
	I0920 10:39:05.583780    8119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:39:05.586642    8119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:39:05.589661    8119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:39:05.592752    8119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:39:05.595649    8119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:39:05.599026    8119 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:39:05.599080    8119 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:39:05.603676    8119 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:39:05.610662    8119 start.go:297] selected driver: qemu2
	I0920 10:39:05.610668    8119 start.go:901] validating driver "qemu2" against &{Name:ha-279000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:39:05.610719    8119 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:39:05.613043    8119 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:39:05.613072    8119 cni.go:84] Creating CNI manager for ""
	I0920 10:39:05.613103    8119 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 10:39:05.613149    8119 start.go:340] cluster config:
	{Name:ha-279000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:39:05.616906    8119 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:39:05.625658    8119 out.go:177] * Starting "ha-279000" primary control-plane node in "ha-279000" cluster
	I0920 10:39:05.629673    8119 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:39:05.629689    8119 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:39:05.629698    8119 cache.go:56] Caching tarball of preloaded images
	I0920 10:39:05.629797    8119 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:39:05.629807    8119 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:39:05.629873    8119 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/ha-279000/config.json ...
	I0920 10:39:05.630277    8119 start.go:360] acquireMachinesLock for ha-279000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:39:05.630321    8119 start.go:364] duration metric: took 36.208µs to acquireMachinesLock for "ha-279000"
	I0920 10:39:05.630333    8119 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:39:05.630338    8119 fix.go:54] fixHost starting: 
	I0920 10:39:05.630489    8119 fix.go:112] recreateIfNeeded on ha-279000: state=Stopped err=<nil>
	W0920 10:39:05.630498    8119 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:39:05.638669    8119 out.go:177] * Restarting existing qemu2 VM for "ha-279000" ...
	I0920 10:39:05.642538    8119 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:39:05.642585    8119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:fe:37:ad:cb:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2
	I0920 10:39:05.644787    8119 main.go:141] libmachine: STDOUT: 
	I0920 10:39:05.644813    8119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:39:05.644847    8119 fix.go:56] duration metric: took 14.507125ms for fixHost
	I0920 10:39:05.644852    8119 start.go:83] releasing machines lock for "ha-279000", held for 14.526ms
	W0920 10:39:05.644859    8119 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:39:05.644895    8119 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:39:05.644900    8119 start.go:729] Will try again in 5 seconds ...
	I0920 10:39:10.647176    8119 start.go:360] acquireMachinesLock for ha-279000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:39:10.647657    8119 start.go:364] duration metric: took 359.666µs to acquireMachinesLock for "ha-279000"
	I0920 10:39:10.647820    8119 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:39:10.647839    8119 fix.go:54] fixHost starting: 
	I0920 10:39:10.648550    8119 fix.go:112] recreateIfNeeded on ha-279000: state=Stopped err=<nil>
	W0920 10:39:10.648576    8119 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:39:10.653101    8119 out.go:177] * Restarting existing qemu2 VM for "ha-279000" ...
	I0920 10:39:10.657080    8119 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:39:10.657289    8119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:fe:37:ad:cb:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2
	I0920 10:39:10.666834    8119 main.go:141] libmachine: STDOUT: 
	I0920 10:39:10.666906    8119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:39:10.667017    8119 fix.go:56] duration metric: took 19.177584ms for fixHost
	I0920 10:39:10.667034    8119 start.go:83] releasing machines lock for "ha-279000", held for 19.354625ms
	W0920 10:39:10.667193    8119 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-279000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:39:10.675078    8119 out.go:201] 
	W0920 10:39:10.678966    8119 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:39:10.678993    8119 out.go:270] * 
	W0920 10:39:10.681592    8119 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:39:10.690071    8119 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-279000 -v=7 --alsologtostderr" : exit status 80
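Every start attempt in this run fails identically: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" on /var/run/socket_vmnet, meaning nothing is listening on that socket on the CI host. A quick standalone probe, assuming the default socket path shown in the cluster config above:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the unix socket that socket_vmnet_client needs; this reproduces
        // the exact failure condition behind every "Connection refused" above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails like this, the remedy is on the host side, i.e. (re)starting the socket_vmnet daemon; the suggested "minikube delete -p ha-279000" is unlikely to help while the daemon itself is down.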
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-279000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (34.013792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.24s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 node delete m03 -v=7 --alsologtostderr: exit status 83 (43.447292ms)

-- stdout --
	* The control-plane node ha-279000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-279000"

-- /stdout --
** stderr ** 
	I0920 10:39:10.838891    8131 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:39:10.839307    8131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:10.839311    8131 out.go:358] Setting ErrFile to fd 2...
	I0920 10:39:10.839313    8131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:10.839456    8131 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:39:10.839687    8131 mustload.go:65] Loading cluster: ha-279000
	I0920 10:39:10.839909    8131 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:39:10.844489    8131 out.go:177] * The control-plane node ha-279000 host is not running: state=Stopped
	I0920 10:39:10.847492    8131 out.go:177]   To start a cluster, run: "minikube start -p ha-279000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-279000 node delete m03 -v=7 --alsologtostderr": exit status 83
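Three exit codes recur throughout this report: 7 from "status" against a stopped host (which helpers_test.go:239 treats as possibly ok), 80 for GUEST_PROVISION start failures, and 83 when a command bails out because the control-plane host is not running. A small sketch mapping the codes as observed in this run; this is an inference from these logs, not an authoritative minikube table:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // explain maps exit codes to the meanings seen in this test run.
    func explain(code int) string {
        switch code {
        case 7:
            return "status on a stopped host (helpers_test treats this as possibly ok)"
        case 80:
            return "start failed: GUEST_PROVISION, driver could not start the VM"
        case 83:
            return "command refused: control-plane host not running"
        default:
            return "not seen in this report"
        }
    }

    func main() {
        err := exec.Command("out/minikube-darwin-arm64", "status", "-p", "ha-279000").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Printf("exit %d: %s\n", ee.ExitCode(), explain(ee.ExitCode()))
        }
    }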
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr: exit status 7 (33.086375ms)

-- stdout --
	ha-279000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:39:10.882602    8133 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:39:10.882781    8133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:10.882787    8133 out.go:358] Setting ErrFile to fd 2...
	I0920 10:39:10.882789    8133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:10.882935    8133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:39:10.883066    8133 out.go:352] Setting JSON to false
	I0920 10:39:10.883078    8133 mustload.go:65] Loading cluster: ha-279000
	I0920 10:39:10.883117    8133 notify.go:220] Checking for updates...
	I0920 10:39:10.883275    8133 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:39:10.883285    8133 status.go:174] checking status of ha-279000 ...
	I0920 10:39:10.883531    8133 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:39:10.883535    8133 status.go:377] host is not running, skipping remaining checks
	I0920 10:39:10.883538    8133 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr" : exit status 7
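The post-mortem's "status --format={{.Host}}" is a Go text/template rendered against a status struct like the one logged at status.go:176 above, which is why the command prints nothing but "Stopped". An illustrative sketch; the struct here is pared down from that log line and is not minikube's real type:

    package main

    import (
        "os"
        "text/template"
    )

    type status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        st := status{Name: "ha-279000", Host: "Stopped", Kubelet: "Stopped",
            APIServer: "Stopped", Kubeconfig: "Stopped"}
        // The --format flag is parsed and executed just like this template.
        t := template.Must(template.New("status").Parse("{{.Host}}\n"))
        _ = t.Execute(os.Stdout, st) // prints: Stopped
    }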
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (33.287541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-279000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-279000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-279000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-279000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (29.997375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

TestMultiControlPlane/serial/StopCluster (3.38s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-279000 stop -v=7 --alsologtostderr: (3.276202375s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr: exit status 7 (68.532625ms)

-- stdout --
	ha-279000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:39:14.340554    8160 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:39:14.340759    8160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:14.340764    8160 out.go:358] Setting ErrFile to fd 2...
	I0920 10:39:14.340767    8160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:14.340947    8160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:39:14.341087    8160 out.go:352] Setting JSON to false
	I0920 10:39:14.341100    8160 mustload.go:65] Loading cluster: ha-279000
	I0920 10:39:14.341125    8160 notify.go:220] Checking for updates...
	I0920 10:39:14.341331    8160 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:39:14.341340    8160 status.go:174] checking status of ha-279000 ...
	I0920 10:39:14.341630    8160 status.go:364] ha-279000 host status = "Stopped" (err=<nil>)
	I0920 10:39:14.341634    8160 status.go:377] host is not running, skipping remaining checks
	I0920 10:39:14.341637    8160 status.go:176] ha-279000 status: &{Name:ha-279000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr": ha-279000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr": ha-279000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-279000 status -v=7 --alsologtostderr": ha-279000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
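The three assertions above (ha_test.go:543, :549, :552) all scan the status text for repeated stanzas: after stopping a healthy three-node HA cluster, "type: Control Plane" should appear at least twice and "kubelet: Stopped" three times, but this cluster never grew past one node. Roughly what those checks amount to, as a sketch against the single-node output captured above:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Status output for the one node that exists; a 3-node HA cluster
        // would repeat this stanza once per node.
        out := `ha-279000
    type: Control Plane
    host: Stopped
    kubelet: Stopped
    apiserver: Stopped
    kubeconfig: Stopped
    `
        fmt.Println("control planes:", strings.Count(out, "type: Control Plane")) // 1; test needs 2
        fmt.Println("stopped kubelets:", strings.Count(out, "kubelet: Stopped"))  // 1; test needs 3
        fmt.Println("stopped apiservers:", strings.Count(out, "apiserver: Stopped")) // 1; test needs 2
    }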

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (32.540916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.38s)

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-279000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-279000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.187201583s)

-- stdout --
	* [ha-279000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-279000" primary control-plane node in "ha-279000" cluster
	* Restarting existing qemu2 VM for "ha-279000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-279000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:39:14.403975    8164 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:39:14.404102    8164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:14.404106    8164 out.go:358] Setting ErrFile to fd 2...
	I0920 10:39:14.404109    8164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:14.404244    8164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:39:14.405202    8164 out.go:352] Setting JSON to false
	I0920 10:39:14.421445    8164 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5925,"bootTime":1726848029,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:39:14.421549    8164 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:39:14.426581    8164 out.go:177] * [ha-279000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:39:14.434537    8164 notify.go:220] Checking for updates...
	I0920 10:39:14.438489    8164 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:39:14.442437    8164 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:39:14.445454    8164 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:39:14.448555    8164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:39:14.451496    8164 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:39:14.454472    8164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:39:14.457857    8164 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:39:14.458152    8164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:39:14.462414    8164 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:39:14.469498    8164 start.go:297] selected driver: qemu2
	I0920 10:39:14.469505    8164 start.go:901] validating driver "qemu2" against &{Name:ha-279000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:39:14.469569    8164 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:39:14.471991    8164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:39:14.472016    8164 cni.go:84] Creating CNI manager for ""
	I0920 10:39:14.472046    8164 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 10:39:14.472105    8164 start.go:340] cluster config:
	{Name:ha-279000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:39:14.475711    8164 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:39:14.484458    8164 out.go:177] * Starting "ha-279000" primary control-plane node in "ha-279000" cluster
	I0920 10:39:14.488313    8164 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:39:14.488327    8164 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:39:14.488332    8164 cache.go:56] Caching tarball of preloaded images
	I0920 10:39:14.488379    8164 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:39:14.488385    8164 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:39:14.488440    8164 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/ha-279000/config.json ...
	I0920 10:39:14.488869    8164 start.go:360] acquireMachinesLock for ha-279000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:39:14.488897    8164 start.go:364] duration metric: took 22.084µs to acquireMachinesLock for "ha-279000"
	I0920 10:39:14.488907    8164 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:39:14.488911    8164 fix.go:54] fixHost starting: 
	I0920 10:39:14.489030    8164 fix.go:112] recreateIfNeeded on ha-279000: state=Stopped err=<nil>
	W0920 10:39:14.489038    8164 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:39:14.497459    8164 out.go:177] * Restarting existing qemu2 VM for "ha-279000" ...
	I0920 10:39:14.501478    8164 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:39:14.501524    8164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:fe:37:ad:cb:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2
	I0920 10:39:14.503530    8164 main.go:141] libmachine: STDOUT: 
	I0920 10:39:14.503559    8164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:39:14.503587    8164 fix.go:56] duration metric: took 14.67425ms for fixHost
	I0920 10:39:14.503592    8164 start.go:83] releasing machines lock for "ha-279000", held for 14.691083ms
	W0920 10:39:14.503599    8164 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:39:14.503630    8164 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:39:14.503635    8164 start.go:729] Will try again in 5 seconds ...
	I0920 10:39:19.505761    8164 start.go:360] acquireMachinesLock for ha-279000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:39:19.506093    8164 start.go:364] duration metric: took 265.5µs to acquireMachinesLock for "ha-279000"
	I0920 10:39:19.506231    8164 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:39:19.506251    8164 fix.go:54] fixHost starting: 
	I0920 10:39:19.506911    8164 fix.go:112] recreateIfNeeded on ha-279000: state=Stopped err=<nil>
	W0920 10:39:19.506942    8164 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:39:19.511410    8164 out.go:177] * Restarting existing qemu2 VM for "ha-279000" ...
	I0920 10:39:19.515406    8164 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:39:19.515557    8164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:fe:37:ad:cb:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/ha-279000/disk.qcow2
	I0920 10:39:19.524387    8164 main.go:141] libmachine: STDOUT: 
	I0920 10:39:19.524437    8164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:39:19.524508    8164 fix.go:56] duration metric: took 18.255458ms for fixHost
	I0920 10:39:19.524527    8164 start.go:83] releasing machines lock for "ha-279000", held for 18.41175ms
	W0920 10:39:19.524707    8164 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-279000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:39:19.535316    8164 out.go:201] 
	W0920 10:39:19.539370    8164 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:39:19.539396    8164 out.go:270] * 
	W0920 10:39:19.542054    8164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:39:19.549295    8164 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-279000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
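The trace above shows minikube's single retry on a failed host start: fixHost fails, start.go:714 records the error, "Will try again in 5 seconds ..." (start.go:729), the second attempt fails the same way, and the run exits with status 80. The shape of that flow, reduced to a sketch; startHost here is a stand-in for the real driver call, not minikube's implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost simulates the driver start that fails throughout this report.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := startHost()
        if err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            err = startHost()
        }
        if err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80
        }
    }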
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (68.420584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-279000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-279000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-279000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-279000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (30.831666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
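
The assertion that fails here reduces to decoding the "profile list --output json" payload shown above and comparing the profile's Status field (the later HAppyAfterSecondaryNodeAdd check additionally counts Config.Nodes). A minimal Go sketch of that check, with a struct trimmed to just the fields visible in this log rather than minikube's real config types:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	for _, p := range pl.Valid {
		if p.Name != "ha-279000" {
			continue
		}
		// The test wants "Degraded"; the profile was persisted as
		// "Starting" with a single node because the VM never came up.
		fmt.Printf("status=%s nodes=%d\n", p.Status, len(p.Config.Nodes))
	}
}

Because the cluster never started, the stored profile keeps Status "Starting" and one node, so both this "Degraded" expectation and the later "HAppy"/4-node expectation fail for the same upstream reason.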

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-279000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-279000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.629416ms)

-- stdout --
	* The control-plane node ha-279000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-279000"

-- /stdout --
** stderr ** 
	I0920 10:39:19.742885    8179 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:39:19.743043    8179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:19.743047    8179 out.go:358] Setting ErrFile to fd 2...
	I0920 10:39:19.743049    8179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:39:19.743193    8179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:39:19.743450    8179 mustload.go:65] Loading cluster: ha-279000
	I0920 10:39:19.743690    8179 config.go:182] Loaded profile config "ha-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:39:19.747607    8179 out.go:177] * The control-plane node ha-279000 host is not running: state=Stopped
	I0920 10:39:19.751589    8179 out.go:177]   To start a cluster, run: "minikube start -p ha-279000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-279000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (30.960083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
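
Each post-mortem block above runs "status --format={{.Host}}"; the --format value is a Go text/template rendered against minikube's status struct, which is why the raw stdout is the single word "Stopped". A stand-in sketch (the Status type here is a simplified assumption; the real struct carries more fields):

package main

import (
	"os"
	"text/template"
)

// Stand-in for minikube's status struct; only Host is needed here.
type Status struct{ Host string }

func main() {
	// --format={{.Host}} is parsed as a text/template and rendered
	// against the status struct, so stdout is just the Host value.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints: Stopped
}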

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-279000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-279000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-279000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-279000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-279000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-279000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-279000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-279000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-279000 -n ha-279000: exit status 7 (31.10625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestImageBuild/serial/Setup (10.01s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-537000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-537000 --driver=qemu2 : exit status 80 (9.938354708s)

-- stdout --
	* [image-537000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-537000" primary control-plane node in "image-537000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-537000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-537000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-537000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-537000 -n image-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-537000 -n image-537000: exit status 7 (68.198625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.01s)
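
Every qemu2 start in this report dies the same way: the driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket. The condition can be checked without minikube by dialing the socket directly; a diagnostic sketch (socket path taken from the log above, not a fix):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The qemu2 driver hands VM networking to socket_vmnet through this
	// unix socket; if no daemon is listening, every VM creation fails.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this runner this yields the same "connection refused"
		// reported in every StartHost failure above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}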

TestJSONOutput/start/Command (9.84s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-517000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-517000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.837224625s)

-- stdout --
	{"specversion":"1.0","id":"42b75097-86f8-4792-ab58-4838dc9249ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-517000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9ec9739-cb49-406f-a890-dff49cc5f392","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19679"}}
	{"specversion":"1.0","id":"477c1a83-6357-4ee0-95ee-063555d5d798","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig"}}
	{"specversion":"1.0","id":"3875a57c-4f0a-4d2c-96db-7147971f6434","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"846351c7-45ec-4793-9337-61d8e9490b2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4f115c81-ae31-423f-9f56-3f5de992e660","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube"}}
	{"specversion":"1.0","id":"a3a41838-f5dd-4563-b2ce-e9fbfae2b1e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f757adea-14d1-4895-b437-ba299b5c531d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"58615611-8fd1-4b59-99cf-57a95f9b43f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"73a1963d-d780-4d2d-a1b9-e03331c7c588","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-517000\" primary control-plane node in \"json-output-517000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e46a9980-42a3-41fe-ac00-e8cb3bc4b3d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c83e6b28-270c-4f28-8b6d-ad0d25391f98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-517000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f0c3fc1-7cb5-4bda-b3be-8351f0da9fe2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"6d480d98-20b9-463f-b9ab-cd1f97fc9c41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"ed1bffe1-cbda-44fd-974b-35e208088ded","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-517000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"b49d8b5f-07d5-4a08-9112-9475217cd046","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"b5c7299c-24c7-4f84-98a5-bb3a0c0788a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-517000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.84s)
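
The "invalid character 'O'" error is a knock-on effect rather than a separate bug: the JSON-output test decodes stdout line by line as CloudEvents, and the bare "OUTPUT:" / "ERROR:" lines emitted by socket_vmnet_client are not JSON. A simplified sketch of that per-line decode (the real logic lives in json_output_test.go):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// First line is a valid CloudEvent; the next two mimic the raw
	// socket_vmnet_client output interleaved into minikube's stdout.
	logged := "{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.step\"}\nOUTPUT: \nERROR: Failed to connect to \"/var/run/socket_vmnet\": Connection refused"

	sc := bufio.NewScanner(strings.NewReader(logged))
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal([]byte(sc.Text()), &ev); err != nil {
			// Fails on "OUTPUT: " with exactly:
			// invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
	}
	fmt.Println("every line decoded as a CloudEvent")
}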

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-517000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-517000 --output=json --user=testUser: exit status 83 (79.774791ms)

-- stdout --
	{"specversion":"1.0","id":"98f9c175-1dc7-4368-b518-ed71125eb0c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-517000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"6afba49d-b806-4735-9df8-a429cae0d69d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-517000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-517000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-517000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-517000 --output=json --user=testUser: exit status 83 (46.294709ms)

-- stdout --
	* The control-plane node json-output-517000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-517000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-517000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-517000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.22s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-605000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-605000 --driver=qemu2 : exit status 80 (9.924128083s)

-- stdout --
	* [first-605000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-605000" primary control-plane node in "first-605000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-605000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-605000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-605000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-20 10:39:54.179347 -0700 PDT m=+470.034427917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-607000 -n second-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-607000 -n second-607000: exit status 85 (78.612667ms)

-- stdout --
	* Profile "second-607000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-607000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-607000" host is not running, skipping log retrieval (state="* Profile \"second-607000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-607000\"")
helpers_test.go:175: Cleaning up "second-607000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-607000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-20 10:39:54.366915 -0700 PDT m=+470.221996084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-605000 -n first-605000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-605000 -n first-605000: exit status 7 (30.878916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-605000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-605000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-605000
--- FAIL: TestMinikubeProfile (10.22s)

TestMountStart/serial/StartWithMountFirst (10.21s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-786000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-786000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.141250208s)

-- stdout --
	* [mount-start-1-786000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-786000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-786000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-786000 -n mount-start-1-786000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-786000 -n mount-start-1-786000: exit status 7 (69.263083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-786000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.21s)

TestMultiNode/serial/FreshStart2Nodes (9.93s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-101000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-101000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.854605375s)

-- stdout --
	* [multinode-101000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-101000" primary control-plane node in "multinode-101000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-101000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:40:04.908881    8331 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:40:04.909027    8331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:40:04.909031    8331 out.go:358] Setting ErrFile to fd 2...
	I0920 10:40:04.909033    8331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:40:04.909171    8331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:40:04.910297    8331 out.go:352] Setting JSON to false
	I0920 10:40:04.926211    8331 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5975,"bootTime":1726848029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:40:04.926294    8331 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:40:04.933126    8331 out.go:177] * [multinode-101000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:40:04.941123    8331 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:40:04.941197    8331 notify.go:220] Checking for updates...
	I0920 10:40:04.948052    8331 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:40:04.951098    8331 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:40:04.954025    8331 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:40:04.957069    8331 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:40:04.960068    8331 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:40:04.961659    8331 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:40:04.966045    8331 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:40:04.972905    8331 start.go:297] selected driver: qemu2
	I0920 10:40:04.972910    8331 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:40:04.972915    8331 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:40:04.975182    8331 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:40:04.979075    8331 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:40:04.982118    8331 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:40:04.982134    8331 cni.go:84] Creating CNI manager for ""
	I0920 10:40:04.982153    8331 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 10:40:04.982157    8331 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 10:40:04.982189    8331 start.go:340] cluster config:
	{Name:multinode-101000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:40:04.985874    8331 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:40:04.993082    8331 out.go:177] * Starting "multinode-101000" primary control-plane node in "multinode-101000" cluster
	I0920 10:40:04.997073    8331 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:40:04.997089    8331 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:40:04.997104    8331 cache.go:56] Caching tarball of preloaded images
	I0920 10:40:04.997173    8331 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:40:04.997180    8331 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:40:04.997395    8331 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/multinode-101000/config.json ...
	I0920 10:40:04.997407    8331 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/multinode-101000/config.json: {Name:mk25638a08a6d7ccc2e20bfebd6f03186d96802c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:40:04.997631    8331 start.go:360] acquireMachinesLock for multinode-101000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:40:04.997667    8331 start.go:364] duration metric: took 29.041µs to acquireMachinesLock for "multinode-101000"
	I0920 10:40:04.997681    8331 start.go:93] Provisioning new machine with config: &{Name:multinode-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:40:04.997713    8331 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:40:05.005122    8331 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:40:05.023543    8331 start.go:159] libmachine.API.Create for "multinode-101000" (driver="qemu2")
	I0920 10:40:05.023577    8331 client.go:168] LocalClient.Create starting
	I0920 10:40:05.023634    8331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:40:05.023664    8331 main.go:141] libmachine: Decoding PEM data...
	I0920 10:40:05.023674    8331 main.go:141] libmachine: Parsing certificate...
	I0920 10:40:05.023713    8331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:40:05.023740    8331 main.go:141] libmachine: Decoding PEM data...
	I0920 10:40:05.023750    8331 main.go:141] libmachine: Parsing certificate...
	I0920 10:40:05.024192    8331 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:40:05.187047    8331 main.go:141] libmachine: Creating SSH key...
	I0920 10:40:05.234008    8331 main.go:141] libmachine: Creating Disk image...
	I0920 10:40:05.234014    8331 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:40:05.234211    8331 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2
	I0920 10:40:05.243313    8331 main.go:141] libmachine: STDOUT: 
	I0920 10:40:05.243329    8331 main.go:141] libmachine: STDERR: 
	I0920 10:40:05.243391    8331 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2 +20000M
	I0920 10:40:05.251233    8331 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:40:05.251247    8331 main.go:141] libmachine: STDERR: 
	I0920 10:40:05.251259    8331 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2
	I0920 10:40:05.251274    8331 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:40:05.251288    8331 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:40:05.251324    8331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:9a:6c:90:1a:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2
	I0920 10:40:05.252933    8331 main.go:141] libmachine: STDOUT: 
	I0920 10:40:05.252946    8331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:40:05.252965    8331 client.go:171] duration metric: took 229.38275ms to LocalClient.Create
	I0920 10:40:07.255124    8331 start.go:128] duration metric: took 2.257398125s to createHost
	I0920 10:40:07.255181    8331 start.go:83] releasing machines lock for "multinode-101000", held for 2.257512917s
	W0920 10:40:07.255251    8331 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:40:07.269635    8331 out.go:177] * Deleting "multinode-101000" in qemu2 ...
	W0920 10:40:07.301979    8331 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:40:07.301996    8331 start.go:729] Will try again in 5 seconds ...
	I0920 10:40:12.304263    8331 start.go:360] acquireMachinesLock for multinode-101000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:40:12.304682    8331 start.go:364] duration metric: took 332.833µs to acquireMachinesLock for "multinode-101000"
	I0920 10:40:12.304790    8331 start.go:93] Provisioning new machine with config: &{Name:multinode-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:40:12.305079    8331 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:40:12.322793    8331 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:40:12.374255    8331 start.go:159] libmachine.API.Create for "multinode-101000" (driver="qemu2")
	I0920 10:40:12.374319    8331 client.go:168] LocalClient.Create starting
	I0920 10:40:12.374455    8331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:40:12.374515    8331 main.go:141] libmachine: Decoding PEM data...
	I0920 10:40:12.374533    8331 main.go:141] libmachine: Parsing certificate...
	I0920 10:40:12.374599    8331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:40:12.374645    8331 main.go:141] libmachine: Decoding PEM data...
	I0920 10:40:12.374658    8331 main.go:141] libmachine: Parsing certificate...
	I0920 10:40:12.375148    8331 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:40:12.547318    8331 main.go:141] libmachine: Creating SSH key...
	I0920 10:40:12.667733    8331 main.go:141] libmachine: Creating Disk image...
	I0920 10:40:12.667738    8331 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:40:12.667930    8331 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2
	I0920 10:40:12.677465    8331 main.go:141] libmachine: STDOUT: 
	I0920 10:40:12.677492    8331 main.go:141] libmachine: STDERR: 
	I0920 10:40:12.677558    8331 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2 +20000M
	I0920 10:40:12.685560    8331 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:40:12.685576    8331 main.go:141] libmachine: STDERR: 
	I0920 10:40:12.685586    8331 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2
	I0920 10:40:12.685590    8331 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:40:12.685602    8331 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:40:12.685653    8331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:cd:b0:46:6e:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2
	I0920 10:40:12.687317    8331 main.go:141] libmachine: STDOUT: 
	I0920 10:40:12.687332    8331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:40:12.687344    8331 client.go:171] duration metric: took 313.019917ms to LocalClient.Create
	I0920 10:40:14.689520    8331 start.go:128] duration metric: took 2.384418292s to createHost
	I0920 10:40:14.689583    8331 start.go:83] releasing machines lock for "multinode-101000", held for 2.38488725s
	W0920 10:40:14.690003    8331 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-101000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-101000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:40:14.704634    8331 out.go:201] 
	W0920 10:40:14.707826    8331 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:40:14.707851    8331 out.go:270] * 
	* 
	W0920 10:40:14.710395    8331 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:40:14.720596    8331 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-101000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (69.431459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.93s)
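The failures in this MultiNode sequence trace back to one root cause visible in the log above: the qemu-img convert/resize steps succeed, but launching the VM through /opt/socket_vmnet/bin/socket_vmnet_client fails with Connection refused on /var/run/socket_vmnet, i.e. no socket_vmnet daemon is listening on the build host. A minimal Go sketch (not minikube code; only the socket path is taken from the log) that checks for a listening daemon before attempting a start:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the failing command line above; adjust if
	// your socket_vmnet install differs.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Same symptom the tests hit: connection refused.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On this runner the probe would fail, which is why every later subtest in the serial sequence sees the host in state "Stopped".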

TestMultiNode/serial/DeployApp2Nodes (117.01s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.524458ms)

** stderr **
	error: cluster "multinode-101000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- rollout status deployment/busybox: exit status 1 (58.166625ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.617709ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:40:14.982282    7279 retry.go:31] will retry after 1.36467538s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.586167ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:40:16.455848    7279 retry.go:31] will retry after 1.194092764s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.136834ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:40:17.760479    7279 retry.go:31] will retry after 2.091063374s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.451792ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:40:19.957321    7279 retry.go:31] will retry after 4.663374446s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.407625ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:40:24.726436    7279 retry.go:31] will retry after 5.483544594s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.903291ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:40:30.315247    7279 retry.go:31] will retry after 6.399021918s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.722625ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:40:36.821285    7279 retry.go:31] will retry after 6.242119087s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.768625ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:40:43.172438    7279 retry.go:31] will retry after 25.030950665s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.472292ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:41:08.313289    7279 retry.go:31] will retry after 36.358210534s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.4895ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0920 10:41:44.781562    7279 retry.go:31] will retry after 26.664811173s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.57725ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.603125ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:530: failed to get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.486083ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.614917ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.239917ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (30.883375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (117.01s)
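The retry.go intervals above (1.36s, 1.19s, 2.09s, 4.66s, 5.48s, 6.40s, 6.24s, 25.03s, 36.36s, 26.66s) grow roughly geometrically with heavy jitter until the test's time budget is exhausted. A rough sketch of that shape, assuming jittered doubling with a cap; this is illustrative only and not a reproduction of minikube's actual retry.go:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	wait := time.Second // assumed initial interval
	for total := time.Duration(0); total < 2*time.Minute; {
		// Jitter: scale the nominal interval by a random factor in [0.5, 1.5).
		d := time.Duration(float64(wait) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v\n", d)
		total += d
		wait *= 2
		if wait > 40*time.Second {
			wait = 40 * time.Second // cap, inferred from the largest waits above
		}
	}
}

Since the cluster never came up, every attempt sees the same "no server found" error and the subtest burns its full 117s before failing.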

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-101000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.530333ms)

** stderr **
	error: no server found for cluster "multinode-101000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (31.0715ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-101000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-101000 -v 3 --alsologtostderr: exit status 83 (45.034083ms)

-- stdout --
	* The control-plane node multinode-101000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-101000"

-- /stdout --
** stderr ** 
	I0920 10:42:11.931959    8411 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:11.932119    8411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:11.932122    8411 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:11.932125    8411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:11.932257    8411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:11.932482    8411 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:11.932705    8411 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:11.937918    8411 out.go:177] * The control-plane node multinode-101000 host is not running: state=Stopped
	I0920 10:42:11.942820    8411 out.go:177]   To start a cluster, run: "minikube start -p multinode-101000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-101000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (30.950625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-101000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-101000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.319416ms)

** stderr **
	Error in configuration: context was not found for specified context: multinode-101000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-101000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-101000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (31.099291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
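Two failures stack in this subtest: kubectl exits nonzero because the kubeconfig context is gone, so its jsonpath output is empty, and the test then fails a second time decoding that empty output. In Go, unmarshaling zero bytes of JSON produces exactly the "unexpected end of JSON input" seen above; a self-contained sketch:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl produced no output, so the test effectively decodes an
	// empty byte slice.
	var labels map[string]string
	err := json.Unmarshal([]byte{}, &labels)
	fmt.Println(err) // unexpected end of JSON input
}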

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-101000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-101000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-101000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-101000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (30.239208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
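The profile JSON above records a single entry under Nodes, while at this point in the serial sequence the test expects three. A sketch of the decode-and-count the assertion implies, with a payload trimmed from the log; the Go type names here are illustrative, not minikube's:

package main

import (
	"encoding/json"
	"fmt"
)

// Only the fields needed for the node count; the full payload appears in
// the log above.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct{ Name string }
		}
	} `json:"valid"`
}

func main() {
	// Trimmed from the `profile list --output json` output above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-101000","Config":{"Nodes":[{"Name":""}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes)) // prints 1; the test wants 3
}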

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status --output json --alsologtostderr: exit status 7 (30.208875ms)

-- stdout --
	{"Name":"multinode-101000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0920 10:42:12.144380    8423 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:12.144542    8423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:12.144545    8423 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:12.144547    8423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:12.144688    8423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:12.144814    8423 out.go:352] Setting JSON to true
	I0920 10:42:12.144825    8423 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:12.144882    8423 notify.go:220] Checking for updates...
	I0920 10:42:12.145077    8423 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:12.145086    8423 status.go:174] checking status of multinode-101000 ...
	I0920 10:42:12.145302    8423 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:42:12.145305    8423 status.go:377] host is not running, skipping remaining checks
	I0920 10:42:12.145307    8423 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-101000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (30.693792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
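The decode error above is the usual object-versus-array mismatch: with a single node, `status --output json` emits one JSON object, but the test unmarshals into a slice ([]cluster.Status). A self-contained reproduction, where Status mirrors only the fields visible in the stdout above:

package main

import (
	"encoding/json"
	"fmt"
)

// Status stands in for minikube's cluster.Status; only fields visible in
// the log are included.
type Status struct {
	Name string
	Host string
}

func main() {
	out := []byte(`{"Name":"multinode-101000","Host":"Stopped"}`)
	var statuses []Status
	err := json.Unmarshal(out, &statuses)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
}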

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 node stop m03: exit status 85 (46.708291ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-101000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status: exit status 7 (30.933166ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status --alsologtostderr: exit status 7 (30.977375ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:42:12.284529    8431 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:12.284687    8431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:12.284691    8431 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:12.284693    8431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:12.284843    8431 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:12.284965    8431 out.go:352] Setting JSON to false
	I0920 10:42:12.284976    8431 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:12.285033    8431 notify.go:220] Checking for updates...
	I0920 10:42:12.285212    8431 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:12.285226    8431 status.go:174] checking status of multinode-101000 ...
	I0920 10:42:12.285460    8431 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:42:12.285464    8431 status.go:377] host is not running, skipping remaining checks
	I0920 10:42:12.285466    8431 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-101000 status --alsologtostderr": multinode-101000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (31.192208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (45.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 node start m03 -v=7 --alsologtostderr: exit status 85 (49.383875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0920 10:42:12.346822    8435 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:12.347227    8435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:12.347231    8435 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:12.347233    8435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:12.347410    8435 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:12.347636    8435 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:12.347824    8435 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:12.352248    8435 out.go:201] 
	W0920 10:42:12.356407    8435 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0920 10:42:12.356413    8435 out.go:270] * 
	* 
	W0920 10:42:12.358455    8435 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:42:12.362409    8435 out.go:201] 

** /stderr **
multinode_test.go:284: I0920 10:42:12.346822    8435 out.go:345] Setting OutFile to fd 1 ...
I0920 10:42:12.347227    8435 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:42:12.347231    8435 out.go:358] Setting ErrFile to fd 2...
I0920 10:42:12.347233    8435 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 10:42:12.347410    8435 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
I0920 10:42:12.347636    8435 mustload.go:65] Loading cluster: multinode-101000
I0920 10:42:12.347824    8435 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 10:42:12.352248    8435 out.go:201] 
W0920 10:42:12.356407    8435 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0920 10:42:12.356413    8435 out.go:270] * 
* 
W0920 10:42:12.358455    8435 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0920 10:42:12.362409    8435 out.go:201] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-101000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr: exit status 7 (31.134542ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:42:12.396763    8437 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:12.396913    8437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:12.396917    8437 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:12.396919    8437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:12.397061    8437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:12.397195    8437 out.go:352] Setting JSON to false
	I0920 10:42:12.397206    8437 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:12.397278    8437 notify.go:220] Checking for updates...
	I0920 10:42:12.397411    8437 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:12.397420    8437 status.go:174] checking status of multinode-101000 ...
	I0920 10:42:12.397689    8437 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:42:12.397693    8437 status.go:377] host is not running, skipping remaining checks
	I0920 10:42:12.397695    8437 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:42:12.398524    7279 retry.go:31] will retry after 1.115690034s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr: exit status 7 (76.594959ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:42:13.590954    8439 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:13.591168    8439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:13.591172    8439 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:13.591176    8439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:13.591363    8439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:13.591517    8439 out.go:352] Setting JSON to false
	I0920 10:42:13.591531    8439 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:13.591573    8439 notify.go:220] Checking for updates...
	I0920 10:42:13.591831    8439 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:13.591851    8439 status.go:174] checking status of multinode-101000 ...
	I0920 10:42:13.592161    8439 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:42:13.592166    8439 status.go:377] host is not running, skipping remaining checks
	I0920 10:42:13.592171    8439 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:42:13.593219    7279 retry.go:31] will retry after 1.308496419s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr: exit status 7 (74.086083ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:42:14.976104    8441 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:14.976296    8441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:14.976301    8441 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:14.976303    8441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:14.976475    8441 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:14.976634    8441 out.go:352] Setting JSON to false
	I0920 10:42:14.976649    8441 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:14.976689    8441 notify.go:220] Checking for updates...
	I0920 10:42:14.976909    8441 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:14.976920    8441 status.go:174] checking status of multinode-101000 ...
	I0920 10:42:14.977227    8441 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:42:14.977232    8441 status.go:377] host is not running, skipping remaining checks
	I0920 10:42:14.977235    8441 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:42:14.978362    7279 retry.go:31] will retry after 1.435471385s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr: exit status 7 (73.887834ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:42:16.488043    8443 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:16.488232    8443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:16.488237    8443 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:16.488240    8443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:16.488404    8443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:16.488561    8443 out.go:352] Setting JSON to false
	I0920 10:42:16.488577    8443 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:16.488617    8443 notify.go:220] Checking for updates...
	I0920 10:42:16.488836    8443 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:16.488849    8443 status.go:174] checking status of multinode-101000 ...
	I0920 10:42:16.489187    8443 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:42:16.489192    8443 status.go:377] host is not running, skipping remaining checks
	I0920 10:42:16.489195    8443 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:42:16.490254    7279 retry.go:31] will retry after 2.37391696s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr: exit status 7 (66.308375ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:42:18.930256    8445 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:18.930518    8445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:18.930522    8445 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:18.930526    8445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:18.930697    8445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:18.930900    8445 out.go:352] Setting JSON to false
	I0920 10:42:18.930916    8445 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:18.930968    8445 notify.go:220] Checking for updates...
	I0920 10:42:18.931222    8445 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:18.931238    8445 status.go:174] checking status of multinode-101000 ...
	I0920 10:42:18.931634    8445 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:42:18.931640    8445 status.go:377] host is not running, skipping remaining checks
	I0920 10:42:18.931643    8445 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:42:18.932831    7279 retry.go:31] will retry after 3.787711954s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr: exit status 7 (75.946084ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:42:22.796629    8447 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:22.796833    8447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:22.796840    8447 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:22.796842    8447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:22.797014    8447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:22.797170    8447 out.go:352] Setting JSON to false
	I0920 10:42:22.797185    8447 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:22.797224    8447 notify.go:220] Checking for updates...
	I0920 10:42:22.797440    8447 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:22.797455    8447 status.go:174] checking status of multinode-101000 ...
	I0920 10:42:22.797791    8447 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:42:22.797796    8447 status.go:377] host is not running, skipping remaining checks
	I0920 10:42:22.797798    8447 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:42:22.798994    7279 retry.go:31] will retry after 8.181140976s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr: exit status 7 (76.357541ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:42:31.056613    8452 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:31.056843    8452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:31.056847    8452 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:31.056851    8452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:31.057009    8452 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:31.057188    8452 out.go:352] Setting JSON to false
	I0920 10:42:31.057203    8452 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:31.057251    8452 notify.go:220] Checking for updates...
	I0920 10:42:31.057476    8452 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:31.057487    8452 status.go:174] checking status of multinode-101000 ...
	I0920 10:42:31.057801    8452 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:42:31.057807    8452 status.go:377] host is not running, skipping remaining checks
	I0920 10:42:31.057809    8452 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:42:31.059036    7279 retry.go:31] will retry after 10.299014184s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr: exit status 7 (78.2975ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:42:41.436318    8454 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:41.436517    8454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:41.436522    8454 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:41.436525    8454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:41.436707    8454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:41.436882    8454 out.go:352] Setting JSON to false
	I0920 10:42:41.436897    8454 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:41.436938    8454 notify.go:220] Checking for updates...
	I0920 10:42:41.437211    8454 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:41.437223    8454 status.go:174] checking status of multinode-101000 ...
	I0920 10:42:41.437590    8454 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:42:41.437595    8454 status.go:377] host is not running, skipping remaining checks
	I0920 10:42:41.437598    8454 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0920 10:42:41.438835    7279 retry.go:31] will retry after 16.43305063s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr: exit status 7 (75.247208ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:42:57.947503    8456 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:42:57.947668    8456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:57.947673    8456 out.go:358] Setting ErrFile to fd 2...
	I0920 10:42:57.947676    8456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:42:57.947826    8456 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:42:57.947993    8456 out.go:352] Setting JSON to false
	I0920 10:42:57.948011    8456 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:42:57.948053    8456 notify.go:220] Checking for updates...
	I0920 10:42:57.948267    8456 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:42:57.948280    8456 status.go:174] checking status of multinode-101000 ...
	I0920 10:42:57.948593    8456 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:42:57.948598    8456 status.go:377] host is not running, skipping remaining checks
	I0920 10:42:57.948601    8456 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-101000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (33.068791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (45.67s)
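
Editor's note: the "will retry after 1.43s ... 16.43s" lines above come from the test harness polling `minikube status` with exponentially growing, jittered delays until the ~45s budget runs out (the test fails at 45.67s). A minimal Go sketch of that pattern, for orientation only; the function name, growth factor, and jitter scheme here are illustrative assumptions, not minikube's actual retry helper:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or maxWait elapses,
	// doubling a jittered delay between attempts -- the shape behind the
	// 1.43s, 2.37s, 3.78s, 8.18s, 10.29s, 16.43s sequence in the log.
	func retryWithBackoff(fn func() error, maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		delay := time.Second
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			// Add up to 100% jitter on top of the base delay.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			return fmt.Errorf("exit status 7") // status kept failing here
		}, 45*time.Second)
		fmt.Println(err, "after", attempts, "attempts")
	}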

TestMultiNode/serial/RestartKeepsNodes (8.04s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-101000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-101000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-101000: (2.673852292s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-101000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-101000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.225868125s)

-- stdout --
	* [multinode-101000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-101000" primary control-plane node in "multinode-101000" cluster
	* Restarting existing qemu2 VM for "multinode-101000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-101000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:43:00.753798    8480 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:00.753951    8480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:00.753959    8480 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:00.753962    8480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:00.754125    8480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:43:00.755357    8480 out.go:352] Setting JSON to false
	I0920 10:43:00.774551    8480 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6151,"bootTime":1726848029,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:43:00.774631    8480 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:43:00.779378    8480 out.go:177] * [multinode-101000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:43:00.786405    8480 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:43:00.786443    8480 notify.go:220] Checking for updates...
	I0920 10:43:00.794307    8480 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:43:00.797259    8480 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:43:00.800312    8480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:43:00.803344    8480 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:43:00.806276    8480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:43:00.809684    8480 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:00.809737    8480 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:43:00.814303    8480 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:43:00.821298    8480 start.go:297] selected driver: qemu2
	I0920 10:43:00.821311    8480 start.go:901] validating driver "qemu2" against &{Name:multinode-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:43:00.821360    8480 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:43:00.823640    8480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:43:00.823667    8480 cni.go:84] Creating CNI manager for ""
	I0920 10:43:00.823696    8480 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 10:43:00.823742    8480 start.go:340] cluster config:
	{Name:multinode-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:43:00.827292    8480 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:43:00.834298    8480 out.go:177] * Starting "multinode-101000" primary control-plane node in "multinode-101000" cluster
	I0920 10:43:00.838288    8480 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:43:00.838302    8480 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:43:00.838308    8480 cache.go:56] Caching tarball of preloaded images
	I0920 10:43:00.838388    8480 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:43:00.838396    8480 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:43:00.838457    8480 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/multinode-101000/config.json ...
	I0920 10:43:00.838919    8480 start.go:360] acquireMachinesLock for multinode-101000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:43:00.838959    8480 start.go:364] duration metric: took 33.333µs to acquireMachinesLock for "multinode-101000"
	I0920 10:43:00.838970    8480 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:43:00.838974    8480 fix.go:54] fixHost starting: 
	I0920 10:43:00.839108    8480 fix.go:112] recreateIfNeeded on multinode-101000: state=Stopped err=<nil>
	W0920 10:43:00.839117    8480 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:43:00.843275    8480 out.go:177] * Restarting existing qemu2 VM for "multinode-101000" ...
	I0920 10:43:00.851338    8480 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:43:00.851381    8480 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:cd:b0:46:6e:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2
	I0920 10:43:00.853540    8480 main.go:141] libmachine: STDOUT: 
	I0920 10:43:00.853559    8480 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:43:00.853585    8480 fix.go:56] duration metric: took 14.608875ms for fixHost
	I0920 10:43:00.853590    8480 start.go:83] releasing machines lock for "multinode-101000", held for 14.625917ms
	W0920 10:43:00.853597    8480 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:43:00.853628    8480 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:43:00.853633    8480 start.go:729] Will try again in 5 seconds ...
	I0920 10:43:05.855752    8480 start.go:360] acquireMachinesLock for multinode-101000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:43:05.856131    8480 start.go:364] duration metric: took 310.125µs to acquireMachinesLock for "multinode-101000"
	I0920 10:43:05.856259    8480 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:43:05.856279    8480 fix.go:54] fixHost starting: 
	I0920 10:43:05.857112    8480 fix.go:112] recreateIfNeeded on multinode-101000: state=Stopped err=<nil>
	W0920 10:43:05.857139    8480 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:43:05.865493    8480 out.go:177] * Restarting existing qemu2 VM for "multinode-101000" ...
	I0920 10:43:05.869458    8480 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:43:05.869698    8480 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:cd:b0:46:6e:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2
	I0920 10:43:05.878545    8480 main.go:141] libmachine: STDOUT: 
	I0920 10:43:05.878601    8480 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:43:05.878659    8480 fix.go:56] duration metric: took 22.380875ms for fixHost
	I0920 10:43:05.878675    8480 start.go:83] releasing machines lock for "multinode-101000", held for 22.524958ms
	W0920 10:43:05.878856    8480 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-101000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-101000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:43:05.885537    8480 out.go:201] 
	W0920 10:43:05.889577    8480 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:43:05.889608    8480 out.go:270] * 
	* 
	W0920 10:43:05.892143    8480 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:43:05.900411    8480 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-101000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-101000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (33.77425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.04s)
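
Editor's note: every start failure in this report has the same root cause: nothing is listening on /var/run/socket_vmnet, so the qemu2 driver's networking wrapper (/opt/socket_vmnet/bin/socket_vmnet_client) gets "Connection refused" before qemu can even come up. A hedged Go sketch of the underlying connectivity check -- a plain unix-socket dial; this is an illustration, not minikube code:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the profile config above.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the state the CI host was in: the socket_vmnet
			// daemon was down, so every "Restarting existing qemu2 VM"
			// attempt failed identically.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}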

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 node delete m03: exit status 83 (43.514833ms)

-- stdout --
	* The control-plane node multinode-101000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-101000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-101000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status --alsologtostderr: exit status 7 (30.602583ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:43:06.093987    8494 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:06.094138    8494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:06.094145    8494 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:06.094148    8494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:06.094287    8494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:43:06.094410    8494 out.go:352] Setting JSON to false
	I0920 10:43:06.094420    8494 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:43:06.094484    8494 notify.go:220] Checking for updates...
	I0920 10:43:06.094646    8494 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:06.094655    8494 status.go:174] checking status of multinode-101000 ...
	I0920 10:43:06.094899    8494 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:43:06.094903    8494 status.go:377] host is not running, skipping remaining checks
	I0920 10:43:06.094904    8494 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-101000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (31.546458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
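
Editor's note: the status.go:176 lines dump a Go struct whose fields map one-to-one onto the "-- stdout --" blocks (host/kubelet/apiserver/kubeconfig). A reconstruction of that shape, with field names read straight off the dump; the struct and package names are assumptions:

	// Package status -- sketch of the value logged as, e.g.:
	// &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped
	//   Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	package status

	type Status struct {
		Name       string
		Host       string // "Running" or "Stopped"
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool   // false here: this is the control-plane node
		TimeToStop string // empty: no scheduled stop configured
		DockerEnv  string
		PodManEnv  string
	}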

TestMultiNode/serial/StopMultiNode (2.92s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-101000 stop: (2.78483525s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status: exit status 7 (66.465041ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-101000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-101000 status --alsologtostderr: exit status 7 (33.340083ms)

-- stdout --
	multinode-101000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0920 10:43:09.011011    8518 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:09.011170    8518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:09.011173    8518 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:09.011175    8518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:09.011312    8518 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:43:09.011426    8518 out.go:352] Setting JSON to false
	I0920 10:43:09.011436    8518 mustload.go:65] Loading cluster: multinode-101000
	I0920 10:43:09.011493    8518 notify.go:220] Checking for updates...
	I0920 10:43:09.011657    8518 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:09.011665    8518 status.go:174] checking status of multinode-101000 ...
	I0920 10:43:09.011897    8518 status.go:364] multinode-101000 host status = "Stopped" (err=<nil>)
	I0920 10:43:09.011900    8518 status.go:377] host is not running, skipping remaining checks
	I0920 10:43:09.011902    8518 status.go:176] multinode-101000 status: &{Name:multinode-101000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-101000 status --alsologtostderr": multinode-101000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-101000 status --alsologtostderr": multinode-101000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (30.875084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.92s)
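
Editor's note: "exit status 7 (may be ok)" is not an arbitrary code. Per the `minikube status` help text, the exit status encodes the VM, cluster, and Kubernetes states as bits from right to left, so 7 = 1+2+4 means all three are down -- consistent with every status dump in this report. A small decoder sketch (the constant names are mine; the bit layout is from the help text):

	package main

	import "fmt"

	const (
		vmDown      = 1 << 0 // minikube VM not running
		clusterDown = 1 << 1 // cluster not running
		k8sDown     = 1 << 2 // Kubernetes not running
	)

	func main() {
		const exitStatus = 7 // as returned throughout this report
		fmt.Println("vm down:        ", exitStatus&vmDown != 0)
		fmt.Println("cluster down:   ", exitStatus&clusterDown != 0)
		fmt.Println("kubernetes down:", exitStatus&k8sDown != 0)
	}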

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-101000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-101000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.184067708s)

-- stdout --
	* [multinode-101000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-101000" primary control-plane node in "multinode-101000" cluster
	* Restarting existing qemu2 VM for "multinode-101000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-101000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:43:09.072016    8522 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:09.072149    8522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:09.072153    8522 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:09.072155    8522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:09.072286    8522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:43:09.073297    8522 out.go:352] Setting JSON to false
	I0920 10:43:09.089298    8522 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6160,"bootTime":1726848029,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:43:09.089366    8522 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:43:09.094308    8522 out.go:177] * [multinode-101000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:43:09.104382    8522 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:43:09.104426    8522 notify.go:220] Checking for updates...
	I0920 10:43:09.112235    8522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:43:09.116121    8522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:43:09.119250    8522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:43:09.122255    8522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:43:09.125369    8522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:43:09.128615    8522 config.go:182] Loaded profile config "multinode-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:09.128904    8522 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:43:09.133271    8522 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:43:09.140213    8522 start.go:297] selected driver: qemu2
	I0920 10:43:09.140222    8522 start.go:901] validating driver "qemu2" against &{Name:multinode-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:43:09.140286    8522 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:43:09.142546    8522 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:43:09.142574    8522 cni.go:84] Creating CNI manager for ""
	I0920 10:43:09.142605    8522 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 10:43:09.142651    8522 start.go:340] cluster config:
	{Name:multinode-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:43:09.146170    8522 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:43:09.152247    8522 out.go:177] * Starting "multinode-101000" primary control-plane node in "multinode-101000" cluster
	I0920 10:43:09.156209    8522 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:43:09.156224    8522 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:43:09.156227    8522 cache.go:56] Caching tarball of preloaded images
	I0920 10:43:09.156283    8522 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:43:09.156289    8522 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:43:09.156346    8522 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/multinode-101000/config.json ...
	I0920 10:43:09.156808    8522 start.go:360] acquireMachinesLock for multinode-101000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:43:09.156838    8522 start.go:364] duration metric: took 23.167µs to acquireMachinesLock for "multinode-101000"
	I0920 10:43:09.156848    8522 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:43:09.156853    8522 fix.go:54] fixHost starting: 
	I0920 10:43:09.156974    8522 fix.go:112] recreateIfNeeded on multinode-101000: state=Stopped err=<nil>
	W0920 10:43:09.156982    8522 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:43:09.164214    8522 out.go:177] * Restarting existing qemu2 VM for "multinode-101000" ...
	I0920 10:43:09.168215    8522 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:43:09.168256    8522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:cd:b0:46:6e:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2
	I0920 10:43:09.170207    8522 main.go:141] libmachine: STDOUT: 
	I0920 10:43:09.170224    8522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:43:09.170260    8522 fix.go:56] duration metric: took 13.405708ms for fixHost
	I0920 10:43:09.170264    8522 start.go:83] releasing machines lock for "multinode-101000", held for 13.422ms
	W0920 10:43:09.170271    8522 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:43:09.170306    8522 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:43:09.170311    8522 start.go:729] Will try again in 5 seconds ...
	I0920 10:43:14.172070    8522 start.go:360] acquireMachinesLock for multinode-101000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:43:14.172473    8522 start.go:364] duration metric: took 311.125µs to acquireMachinesLock for "multinode-101000"
	I0920 10:43:14.172606    8522 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:43:14.172624    8522 fix.go:54] fixHost starting: 
	I0920 10:43:14.173361    8522 fix.go:112] recreateIfNeeded on multinode-101000: state=Stopped err=<nil>
	W0920 10:43:14.173388    8522 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:43:14.176791    8522 out.go:177] * Restarting existing qemu2 VM for "multinode-101000" ...
	I0920 10:43:14.184841    8522 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:43:14.185072    8522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:cd:b0:46:6e:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/multinode-101000/disk.qcow2
	I0920 10:43:14.194557    8522 main.go:141] libmachine: STDOUT: 
	I0920 10:43:14.194615    8522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:43:14.194685    8522 fix.go:56] duration metric: took 22.06325ms for fixHost
	I0920 10:43:14.194701    8522 start.go:83] releasing machines lock for "multinode-101000", held for 22.203167ms
	W0920 10:43:14.194842    8522 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-101000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-101000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:43:14.201799    8522 out.go:201] 
	W0920 10:43:14.204775    8522 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:43:14.204809    8522 out.go:270] * 
	* 
	W0920 10:43:14.207473    8522 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:43:14.215734    8522 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-101000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (69.773125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
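
Every qemu2 start in this run fails at the same point: the driver shells out to socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet. A minimal host-side triage sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as the SocketVMnetClientPath in these logs indicates (the launchd query is an assumption about how the daemon was registered, not something shown in this log):

	# Does the unix socket exist, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon was registered with launchd (service label is hypothetical):
	sudo launchctl list | grep -i socket_vmnet

If the socket is missing or nothing is listening, every later qemu2 test in this report fails identically, which is what the following sections show.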

TestMultiNode/serial/ValidateNameConflict (20.06s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-101000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-101000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-101000-m01 --driver=qemu2 : exit status 80 (9.892900416s)

-- stdout --
	* [multinode-101000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-101000-m01" primary control-plane node in "multinode-101000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-101000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-101000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-101000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-101000-m02 --driver=qemu2 : exit status 80 (9.937233833s)

-- stdout --
	* [multinode-101000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-101000-m02" primary control-plane node in "multinode-101000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-101000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-101000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-101000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-101000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-101000: exit status 83 (80.346459ms)

-- stdout --
	* The control-plane node multinode-101000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-101000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-101000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-101000 -n multinode-101000: exit status 7 (31.316167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.06s)
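
Both the fresh-create path and the delete-and-retry path hit the identical "Connection refused", so the broken state lives on the host rather than in any profile, and the suggested "minikube delete" cannot repair it. A hedged recovery sketch, assuming the daemon binary sits next to the client under /opt/socket_vmnet (the path prefix comes from these logs; the gateway address is the socket_vmnet README default and may differ on this agent):

	# Run the daemon in the foreground to surface its own errors;
	# root is needed so it can create the vmnet interface.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet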

TestPreload (10.01s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-153000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-153000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.856490083s)

-- stdout --
	* [test-preload-153000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-153000" primary control-plane node in "test-preload-153000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-153000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:43:34.498413    8580 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:43:34.498535    8580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.498538    8580 out.go:358] Setting ErrFile to fd 2...
	I0920 10:43:34.498540    8580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:43:34.498683    8580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:43:34.499767    8580 out.go:352] Setting JSON to false
	I0920 10:43:34.515953    8580 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6185,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:43:34.516039    8580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:43:34.523200    8580 out.go:177] * [test-preload-153000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:43:34.530275    8580 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:43:34.530318    8580 notify.go:220] Checking for updates...
	I0920 10:43:34.539143    8580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:43:34.542256    8580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:43:34.546111    8580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:43:34.549186    8580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:43:34.552188    8580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:43:34.555488    8580 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:43:34.555539    8580 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:43:34.560116    8580 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:43:34.567223    8580 start.go:297] selected driver: qemu2
	I0920 10:43:34.567229    8580 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:43:34.567238    8580 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:43:34.569602    8580 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:43:34.574105    8580 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:43:34.577265    8580 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:43:34.577285    8580 cni.go:84] Creating CNI manager for ""
	I0920 10:43:34.577306    8580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:43:34.577311    8580 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:43:34.577341    8580 start.go:340] cluster config:
	{Name:test-preload-153000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:43:34.581117    8580 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:43:34.590165    8580 out.go:177] * Starting "test-preload-153000" primary control-plane node in "test-preload-153000" cluster
	I0920 10:43:34.593024    8580 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0920 10:43:34.593086    8580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/test-preload-153000/config.json ...
	I0920 10:43:34.593103    8580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/test-preload-153000/config.json: {Name:mka56f012b8c075d2a88e326ee0d4e80654eb510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:43:34.593105    8580 cache.go:107] acquiring lock: {Name:mk68c05f40ad97233a07e049f52f8b9752387135 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:43:34.593112    8580 cache.go:107] acquiring lock: {Name:mkff25e85758adb3ffaca2246736e95688f3ee7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:43:34.593126    8580 cache.go:107] acquiring lock: {Name:mkb2a8d1ef19164b680c83fab694b4852881c07f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:43:34.593316    8580 cache.go:107] acquiring lock: {Name:mk34770eb77eda60227eea183d2ca4e231b850b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:43:34.593352    8580 cache.go:107] acquiring lock: {Name:mk8646af3b2db1c00e79216ed7fc26b3795dd10d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:43:34.593384    8580 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0920 10:43:34.593398    8580 cache.go:107] acquiring lock: {Name:mk89bb9f390755a151d5822794060f412fe48722 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:43:34.593400    8580 start.go:360] acquireMachinesLock for test-preload-153000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:43:34.593345    8580 cache.go:107] acquiring lock: {Name:mk3193c5d213a17afc756d2a7e97c28a3bf58221 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:43:34.593449    8580 start.go:364] duration metric: took 34.958µs to acquireMachinesLock for "test-preload-153000"
	I0920 10:43:34.593451    8580 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:43:34.593465    8580 start.go:93] Provisioning new machine with config: &{Name:test-preload-153000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:43:34.593499    8580 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:43:34.593585    8580 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 10:43:34.593625    8580 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 10:43:34.593705    8580 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:43:34.593720    8580 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0920 10:43:34.593416    8580 cache.go:107] acquiring lock: {Name:mk16e5bcbf24675a3085786b93f5d7fd1f99fc8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:43:34.593758    8580 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:43:34.594179    8580 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0920 10:43:34.598174    8580 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:43:34.607013    8580 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0920 10:43:34.607293    8580 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0920 10:43:34.607458    8580 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:43:34.607657    8580 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 10:43:34.608286    8580 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0920 10:43:34.608438    8580 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:43:34.608465    8580 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:43:34.608522    8580 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0920 10:43:34.616766    8580 start.go:159] libmachine.API.Create for "test-preload-153000" (driver="qemu2")
	I0920 10:43:34.616785    8580 client.go:168] LocalClient.Create starting
	I0920 10:43:34.616877    8580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:43:34.616909    8580 main.go:141] libmachine: Decoding PEM data...
	I0920 10:43:34.616919    8580 main.go:141] libmachine: Parsing certificate...
	I0920 10:43:34.616957    8580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:43:34.616982    8580 main.go:141] libmachine: Decoding PEM data...
	I0920 10:43:34.616992    8580 main.go:141] libmachine: Parsing certificate...
	I0920 10:43:34.617367    8580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:43:34.782920    8580 main.go:141] libmachine: Creating SSH key...
	I0920 10:43:34.859679    8580 main.go:141] libmachine: Creating Disk image...
	I0920 10:43:34.859701    8580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:43:34.859929    8580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/disk.qcow2
	I0920 10:43:34.870172    8580 main.go:141] libmachine: STDOUT: 
	I0920 10:43:34.870190    8580 main.go:141] libmachine: STDERR: 
	I0920 10:43:34.870244    8580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/disk.qcow2 +20000M
	I0920 10:43:34.879020    8580 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:43:34.879042    8580 main.go:141] libmachine: STDERR: 
	I0920 10:43:34.879073    8580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/disk.qcow2
	I0920 10:43:34.879077    8580 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:43:34.879093    8580 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:43:34.879119    8580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:f0:e8:45:08:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/disk.qcow2
	I0920 10:43:34.881007    8580 main.go:141] libmachine: STDOUT: 
	I0920 10:43:34.881024    8580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:43:34.881043    8580 client.go:171] duration metric: took 264.253417ms to LocalClient.Create
	I0920 10:43:35.003229    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0920 10:43:35.015505    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0920 10:43:35.022519    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0920 10:43:35.025410    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0920 10:43:35.051291    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0920 10:43:35.124149    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0920 10:43:35.125411    8580 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0920 10:43:35.125468    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 10:43:35.199761    8580 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0920 10:43:35.199811    8580 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 606.521458ms
	I0920 10:43:35.199862    8580 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0920 10:43:35.670092    8580 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0920 10:43:35.670192    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 10:43:36.155454    8580 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0920 10:43:36.155525    8580 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.562422584s
	I0920 10:43:36.155559    8580 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0920 10:43:36.739043    8580 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0920 10:43:36.739087    8580 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.145856583s
	I0920 10:43:36.739110    8580 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0920 10:43:36.881266    8580 start.go:128] duration metric: took 2.287759s to createHost
	I0920 10:43:36.881316    8580 start.go:83] releasing machines lock for "test-preload-153000", held for 2.287862333s
	W0920 10:43:36.881375    8580 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:43:36.895788    8580 out.go:177] * Deleting "test-preload-153000" in qemu2 ...
	W0920 10:43:36.932590    8580 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:43:36.932614    8580 start.go:729] Will try again in 5 seconds ...
	I0920 10:43:38.613375    8580 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0920 10:43:38.613425    8580 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.020081417s
	I0920 10:43:38.613453    8580 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0920 10:43:39.433704    8580 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0920 10:43:39.433753    8580 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.840455s
	I0920 10:43:39.433816    8580 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0920 10:43:39.983751    8580 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0920 10:43:39.983799    8580 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.390416542s
	I0920 10:43:39.983821    8580 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0920 10:43:40.524257    8580 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0920 10:43:40.524318    8580 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.931228459s
	I0920 10:43:40.524363    8580 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0920 10:43:41.932738    8580 start.go:360] acquireMachinesLock for test-preload-153000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:43:41.933175    8580 start.go:364] duration metric: took 362.208µs to acquireMachinesLock for "test-preload-153000"
	I0920 10:43:41.933304    8580 start.go:93] Provisioning new machine with config: &{Name:test-preload-153000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:43:41.933539    8580 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:43:41.954276    8580 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:43:42.005146    8580 start.go:159] libmachine.API.Create for "test-preload-153000" (driver="qemu2")
	I0920 10:43:42.005237    8580 client.go:168] LocalClient.Create starting
	I0920 10:43:42.005393    8580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:43:42.005461    8580 main.go:141] libmachine: Decoding PEM data...
	I0920 10:43:42.005482    8580 main.go:141] libmachine: Parsing certificate...
	I0920 10:43:42.005551    8580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:43:42.005596    8580 main.go:141] libmachine: Decoding PEM data...
	I0920 10:43:42.005613    8580 main.go:141] libmachine: Parsing certificate...
	I0920 10:43:42.006116    8580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:43:42.181142    8580 main.go:141] libmachine: Creating SSH key...
	I0920 10:43:42.257445    8580 main.go:141] libmachine: Creating Disk image...
	I0920 10:43:42.257451    8580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:43:42.257648    8580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/disk.qcow2
	I0920 10:43:42.267056    8580 main.go:141] libmachine: STDOUT: 
	I0920 10:43:42.267072    8580 main.go:141] libmachine: STDERR: 
	I0920 10:43:42.267134    8580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/disk.qcow2 +20000M
	I0920 10:43:42.274986    8580 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:43:42.275003    8580 main.go:141] libmachine: STDERR: 
	I0920 10:43:42.275014    8580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/disk.qcow2
	I0920 10:43:42.275019    8580 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:43:42.275027    8580 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:43:42.275070    8580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:97:47:ad:2c:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/test-preload-153000/disk.qcow2
	I0920 10:43:42.276703    8580 main.go:141] libmachine: STDOUT: 
	I0920 10:43:42.276717    8580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:43:42.276731    8580 client.go:171] duration metric: took 271.466375ms to LocalClient.Create
	I0920 10:43:43.514400    8580 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0920 10:43:43.514466    8580 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.921380042s
	I0920 10:43:43.514490    8580 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0920 10:43:43.514529    8580 cache.go:87] Successfully saved all images to host disk.
	I0920 10:43:44.278908    8580 start.go:128] duration metric: took 2.345343583s to createHost
	I0920 10:43:44.278947    8580 start.go:83] releasing machines lock for "test-preload-153000", held for 2.345749875s
	W0920 10:43:44.279198    8580 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-153000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-153000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:43:44.293829    8580 out.go:201] 
	W0920 10:43:44.297841    8580 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:43:44.297886    8580 out.go:270] * 
	* 
	W0920 10:43:44.300238    8580 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:43:44.309758    8580 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-153000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-20 10:43:44.329069 -0700 PDT m=+700.184997584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-153000 -n test-preload-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-153000 -n test-preload-153000: exit status 7 (69.867041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-153000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-153000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-153000
--- FAIL: TestPreload (10.01s)
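
TestPreload's failure is confined to VM creation: the log above shows every v1.24.4 image (pause, storage-provisioner, coredns, kube-scheduler, kube-controller-manager, kube-apiserver, kube-proxy, etcd) being cached to a tar file successfully before the start aborts. The cache half of the run can be double-checked independently of the VM by listing the directory the log reports (a sketch, not part of the test):

	# Path taken verbatim from the cache.go lines above.
	ls /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/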

TestScheduledStopUnix (10.13s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-928000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-928000 --memory=2048 --driver=qemu2 : exit status 80 (9.972430375s)

-- stdout --
	* [scheduled-stop-928000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-928000" primary control-plane node in "scheduled-stop-928000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-928000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-928000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-928000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-928000" primary control-plane node in "scheduled-stop-928000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-928000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-928000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-20 10:43:54.454107 -0700 PDT m=+710.310072834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-928000 -n scheduled-stop-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-928000 -n scheduled-stop-928000: exit status 7 (70.404667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-928000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-928000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-928000
--- FAIL: TestScheduledStopUnix (10.13s)
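
The scheduled-stop logic itself is never reached; the test aborts during provisioning. For context, once a profile does start, the behavior under test is driven by minikube's stop scheduler, roughly as follows (a sketch of the flag the test exercises; see "minikube stop --help" for the exact form):

	# Schedule the profile to stop in five minutes instead of immediately.
	out/minikube-darwin-arm64 stop -p scheduled-stop-928000 --schedule 5m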

TestSkaffold (12.32s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3464494038 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3464494038 version: (1.060674791s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-605000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-605000 --memory=2600 --driver=qemu2 : exit status 80 (9.993220125s)

-- stdout --
	* [skaffold-605000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-605000" primary control-plane node in "skaffold-605000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-605000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-605000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-605000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-605000" primary control-plane node in "skaffold-605000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-605000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-605000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-20 10:44:06.77653 -0700 PDT m=+722.632541126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-605000 -n skaffold-605000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-605000 -n skaffold-605000: exit status 7 (63.772292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-605000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-605000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-605000
--- FAIL: TestSkaffold (12.32s)
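Both start attempts above die at the same point: the qemu2 driver cannot reach the socket_vmnet control socket, so no VM ever comes up. A quick way to confirm the daemon's state on the CI host is to dial the socket directly; the short Go sketch below is an illustrative addition (the file name and the idea of probing outside minikube are assumptions, but the socket path is the one from the failure output). A "connection refused" from this probe means nothing is listening at /var/run/socket_vmnet, matching every failure in this test.

	// probe_vmnet.go — hypothetical standalone diagnostic, not part of the
	// minikube test suite: dials the socket_vmnet control socket to see
	// whether anything is listening there.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failure above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A refused dial reproduces the error the qemu2 driver reports.
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

Run with "go run probe_vmnet.go" (typically via sudo, since /var/run/socket_vmnet is normally root-owned).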

TestRunningBinaryUpgrade (589.02s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2127276111 start -p running-upgrade-097000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2127276111 start -p running-upgrade-097000 --memory=2200 --vm-driver=qemu2 : (51.366275875s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-097000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-097000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.19987175s)

-- stdout --
	* [running-upgrade-097000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-097000" primary control-plane node in "running-upgrade-097000" cluster
	* Updating the running qemu2 "running-upgrade-097000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0920 10:45:40.684722    8964 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:45:40.684876    8964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:45:40.684880    8964 out.go:358] Setting ErrFile to fd 2...
	I0920 10:45:40.684882    8964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:45:40.685026    8964 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:45:40.686154    8964 out.go:352] Setting JSON to false
	I0920 10:45:40.702379    8964 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6311,"bootTime":1726848029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:45:40.702441    8964 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:45:40.707544    8964 out.go:177] * [running-upgrade-097000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:45:40.715590    8964 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:45:40.715661    8964 notify.go:220] Checking for updates...
	I0920 10:45:40.722568    8964 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:45:40.726539    8964 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:45:40.729551    8964 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:45:40.739541    8964 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:45:40.746532    8964 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:45:40.749816    8964 config.go:182] Loaded profile config "running-upgrade-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:45:40.752534    8964 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 10:45:40.753567    8964 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:45:40.757541    8964 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:45:40.764376    8964 start.go:297] selected driver: qemu2
	I0920 10:45:40.764380    8964 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:45:40.764420    8964 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:45:40.766839    8964 cni.go:84] Creating CNI manager for ""
	I0920 10:45:40.766874    8964 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:45:40.766901    8964 start.go:340] cluster config:
	{Name:running-upgrade-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:45:40.766957    8964 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:45:40.774565    8964 out.go:177] * Starting "running-upgrade-097000" primary control-plane node in "running-upgrade-097000" cluster
	I0920 10:45:40.778552    8964 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:45:40.778570    8964 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0920 10:45:40.778574    8964 cache.go:56] Caching tarball of preloaded images
	I0920 10:45:40.778645    8964 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:45:40.778651    8964 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0920 10:45:40.778701    8964 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/config.json ...
	I0920 10:45:40.779056    8964 start.go:360] acquireMachinesLock for running-upgrade-097000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:45:40.779092    8964 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "running-upgrade-097000"
	I0920 10:45:40.779102    8964 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:45:40.779107    8964 fix.go:54] fixHost starting: 
	I0920 10:45:40.779738    8964 fix.go:112] recreateIfNeeded on running-upgrade-097000: state=Running err=<nil>
	W0920 10:45:40.779746    8964 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:45:40.781827    8964 out.go:177] * Updating the running qemu2 "running-upgrade-097000" VM ...
	I0920 10:45:40.789560    8964 machine.go:93] provisionDockerMachine start ...
	I0920 10:45:40.789599    8964 main.go:141] libmachine: Using SSH client type: native
	I0920 10:45:40.789699    8964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100995c00] 0x100998440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:45:40.789704    8964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 10:45:40.856990    8964 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-097000
	
	I0920 10:45:40.857006    8964 buildroot.go:166] provisioning hostname "running-upgrade-097000"
	I0920 10:45:40.857072    8964 main.go:141] libmachine: Using SSH client type: native
	I0920 10:45:40.857177    8964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100995c00] 0x100998440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:45:40.857184    8964 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-097000 && echo "running-upgrade-097000" | sudo tee /etc/hostname
	I0920 10:45:40.930297    8964 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-097000
	
	I0920 10:45:40.930359    8964 main.go:141] libmachine: Using SSH client type: native
	I0920 10:45:40.930478    8964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100995c00] 0x100998440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:45:40.930489    8964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-097000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-097000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-097000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 10:45:40.998142    8964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:45:40.998162    8964 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19679-6783/.minikube CaCertPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19679-6783/.minikube}
	I0920 10:45:40.998174    8964 buildroot.go:174] setting up certificates
	I0920 10:45:40.998178    8964 provision.go:84] configureAuth start
	I0920 10:45:40.998182    8964 provision.go:143] copyHostCerts
	I0920 10:45:40.998264    8964 exec_runner.go:144] found /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.pem, removing ...
	I0920 10:45:40.998274    8964 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.pem
	I0920 10:45:40.998405    8964 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.pem (1078 bytes)
	I0920 10:45:40.998584    8964 exec_runner.go:144] found /Users/jenkins/minikube-integration/19679-6783/.minikube/cert.pem, removing ...
	I0920 10:45:40.998588    8964 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19679-6783/.minikube/cert.pem
	I0920 10:45:40.998638    8964 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19679-6783/.minikube/cert.pem (1123 bytes)
	I0920 10:45:40.998743    8964 exec_runner.go:144] found /Users/jenkins/minikube-integration/19679-6783/.minikube/key.pem, removing ...
	I0920 10:45:40.998747    8964 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19679-6783/.minikube/key.pem
	I0920 10:45:40.998817    8964 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19679-6783/.minikube/key.pem (1675 bytes)
	I0920 10:45:40.998912    8964 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-097000 san=[127.0.0.1 localhost minikube running-upgrade-097000]
	I0920 10:45:41.117241    8964 provision.go:177] copyRemoteCerts
	I0920 10:45:41.117292    8964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 10:45:41.117301    8964 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/running-upgrade-097000/id_rsa Username:docker}
	I0920 10:45:41.154317    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 10:45:41.161884    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 10:45:41.169315    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 10:45:41.175843    8964 provision.go:87] duration metric: took 177.657417ms to configureAuth
	I0920 10:45:41.175852    8964 buildroot.go:189] setting minikube options for container-runtime
	I0920 10:45:41.175971    8964 config.go:182] Loaded profile config "running-upgrade-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:45:41.176007    8964 main.go:141] libmachine: Using SSH client type: native
	I0920 10:45:41.176097    8964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100995c00] 0x100998440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:45:41.176106    8964 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 10:45:41.243651    8964 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 10:45:41.243660    8964 buildroot.go:70] root file system type: tmpfs
	I0920 10:45:41.243707    8964 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 10:45:41.243775    8964 main.go:141] libmachine: Using SSH client type: native
	I0920 10:45:41.243889    8964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100995c00] 0x100998440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:45:41.243930    8964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 10:45:41.315310    8964 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 10:45:41.315374    8964 main.go:141] libmachine: Using SSH client type: native
	I0920 10:45:41.315493    8964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100995c00] 0x100998440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:45:41.315502    8964 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 10:45:41.385837    8964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:45:41.385846    8964 machine.go:96] duration metric: took 596.281875ms to provisionDockerMachine
	I0920 10:45:41.385851    8964 start.go:293] postStartSetup for "running-upgrade-097000" (driver="qemu2")
	I0920 10:45:41.385857    8964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 10:45:41.385915    8964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 10:45:41.385923    8964 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/running-upgrade-097000/id_rsa Username:docker}
	I0920 10:45:41.421557    8964 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 10:45:41.422856    8964 info.go:137] Remote host: Buildroot 2021.02.12
	I0920 10:45:41.422863    8964 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19679-6783/.minikube/addons for local assets ...
	I0920 10:45:41.422948    8964 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19679-6783/.minikube/files for local assets ...
	I0920 10:45:41.423074    8964 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/ssl/certs/72792.pem -> 72792.pem in /etc/ssl/certs
	I0920 10:45:41.423199    8964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 10:45:41.426046    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/ssl/certs/72792.pem --> /etc/ssl/certs/72792.pem (1708 bytes)
	I0920 10:45:41.433155    8964 start.go:296] duration metric: took 47.298917ms for postStartSetup
	I0920 10:45:41.433167    8964 fix.go:56] duration metric: took 654.063875ms for fixHost
	I0920 10:45:41.433202    8964 main.go:141] libmachine: Using SSH client type: native
	I0920 10:45:41.433303    8964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100995c00] 0x100998440 <nil>  [] 0s} localhost 51261 <nil> <nil>}
	I0920 10:45:41.433312    8964 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 10:45:41.499580    8964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726854341.186335347
	
	I0920 10:45:41.499587    8964 fix.go:216] guest clock: 1726854341.186335347
	I0920 10:45:41.499591    8964 fix.go:229] Guest: 2024-09-20 10:45:41.186335347 -0700 PDT Remote: 2024-09-20 10:45:41.433169 -0700 PDT m=+0.768264959 (delta=-246.833653ms)
	I0920 10:45:41.499601    8964 fix.go:200] guest clock delta is within tolerance: -246.833653ms
	I0920 10:45:41.499604    8964 start.go:83] releasing machines lock for "running-upgrade-097000", held for 720.509417ms
	I0920 10:45:41.499668    8964 ssh_runner.go:195] Run: cat /version.json
	I0920 10:45:41.499683    8964 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/running-upgrade-097000/id_rsa Username:docker}
	I0920 10:45:41.499669    8964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 10:45:41.499723    8964 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/running-upgrade-097000/id_rsa Username:docker}
	W0920 10:45:41.500229    8964 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51261: connect: connection refused
	I0920 10:45:41.500247    8964 retry.go:31] will retry after 194.850841ms: dial tcp [::1]:51261: connect: connection refused
	W0920 10:45:41.736458    8964 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0920 10:45:41.736557    8964 ssh_runner.go:195] Run: systemctl --version
	I0920 10:45:41.738906    8964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 10:45:41.740944    8964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 10:45:41.740978    8964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0920 10:45:41.744438    8964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0920 10:45:41.749342    8964 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 10:45:41.749352    8964 start.go:495] detecting cgroup driver to use...
	I0920 10:45:41.749477    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:45:41.755478    8964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0920 10:45:41.758482    8964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 10:45:41.761640    8964 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 10:45:41.761671    8964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 10:45:41.764443    8964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:45:41.767313    8964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 10:45:41.770329    8964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:45:41.773072    8964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 10:45:41.776005    8964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 10:45:41.778824    8964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 10:45:41.781961    8964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 10:45:41.785488    8964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 10:45:41.788300    8964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 10:45:41.790758    8964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:45:41.876302    8964 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 10:45:41.882729    8964 start.go:495] detecting cgroup driver to use...
	I0920 10:45:41.882808    8964 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 10:45:41.892406    8964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:45:41.897719    8964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 10:45:41.905060    8964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:45:41.909858    8964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:45:41.915076    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:45:41.920627    8964 ssh_runner.go:195] Run: which cri-dockerd
	I0920 10:45:41.921719    8964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 10:45:41.924712    8964 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0920 10:45:41.930041    8964 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 10:45:42.029804    8964 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 10:45:42.123367    8964 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 10:45:42.123430    8964 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 10:45:42.128717    8964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:45:42.222669    8964 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:45:45.710076    8964 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.487403625s)
	I0920 10:45:45.710161    8964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 10:45:45.715103    8964 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0920 10:45:45.721271    8964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:45:45.725729    8964 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 10:45:45.806486    8964 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 10:45:45.893038    8964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:45:45.948059    8964 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 10:45:45.953944    8964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:45:45.958673    8964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:45:46.019636    8964 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 10:45:46.058147    8964 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 10:45:46.058248    8964 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 10:45:46.061142    8964 start.go:563] Will wait 60s for crictl version
	I0920 10:45:46.061203    8964 ssh_runner.go:195] Run: which crictl
	I0920 10:45:46.062587    8964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 10:45:46.074073    8964 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0920 10:45:46.074157    8964 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:45:46.086517    8964 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:45:46.103762    8964 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0920 10:45:46.103907    8964 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0920 10:45:46.105331    8964 kubeadm.go:883] updating cluster {Name:running-upgrade-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0920 10:45:46.105380    8964 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:45:46.105452    8964 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:45:46.116446    8964 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:45:46.116454    8964 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:45:46.116509    8964 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:45:46.119425    8964 ssh_runner.go:195] Run: which lz4
	I0920 10:45:46.120706    8964 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 10:45:46.121929    8964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 10:45:46.121940    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0920 10:45:47.117536    8964 docker.go:649] duration metric: took 996.875ms to copy over tarball
	I0920 10:45:47.117599    8964 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 10:45:48.220616    8964 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.103005166s)
	I0920 10:45:48.220630    8964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 10:45:48.236714    8964 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:45:48.239583    8964 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0920 10:45:48.244492    8964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:45:48.300797    8964 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:45:49.477020    8964 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.17621175s)
	I0920 10:45:49.477132    8964 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:45:49.494430    8964 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:45:49.494439    8964 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:45:49.494445    8964 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 10:45:49.498527    8964 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:45:49.500483    8964 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:45:49.503082    8964 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:45:49.502986    8964 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:45:49.503229    8964 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:45:49.505409    8964 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:45:49.505421    8964 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:45:49.507286    8964 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 10:45:49.507355    8964 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:45:49.507387    8964 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:45:49.508640    8964 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:45:49.508637    8964 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:45:49.509634    8964 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:45:49.509684    8964 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 10:45:49.510553    8964 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:45:49.511598    8964 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:45:49.914396    8964 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:45:49.929152    8964 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0920 10:45:49.929186    8964 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:45:49.929264    8964 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:45:49.931898    8964 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:45:49.935009    8964 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:45:49.943809    8964 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0920 10:45:49.949124    8964 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0920 10:45:49.949136    8964 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0920 10:45:49.950743    8964 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0920 10:45:49.950762    8964 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:45:49.950797    8964 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:45:49.955692    8964 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0920 10:45:49.955714    8964 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:45:49.955796    8964 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:45:49.971432    8964 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0920 10:45:49.971453    8964 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:45:49.971521    8964 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0920 10:45:49.971651    8964 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0920 10:45:49.971661    8964 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0920 10:45:49.971675    8964 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0920 10:45:49.971689    8964 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0920 10:45:49.978914    8964 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0920 10:45:49.993035    8964 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0920 10:45:49.993185    8964 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:45:49.993396    8964 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0920 10:45:49.993461    8964 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0920 10:45:49.994991    8964 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0920 10:45:49.995000    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0920 10:45:49.995005    8964 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0920 10:45:49.995014    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0920 10:45:50.003387    8964 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0920 10:45:50.003544    8964 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:45:50.006157    8964 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0920 10:45:50.006166    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0920 10:45:50.060160    8964 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:45:50.108199    8964 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0920 10:45:50.108199    8964 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0920 10:45:50.108215    8964 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0920 10:45:50.108224    8964 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:45:50.108225    8964 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:45:50.108288    8964 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:45:50.108288    8964 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:45:50.132242    8964 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 10:45:50.132380    8964 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:45:50.143198    8964 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0920 10:45:50.149887    8964 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0920 10:45:50.149912    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0920 10:45:50.237883    8964 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:45:50.237899    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0920 10:45:50.341113    8964 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0920 10:45:50.344634    8964 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:45:50.344644    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0920 10:45:50.434849    8964 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0920 10:45:50.434973    8964 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:45:50.519565    8964 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0920 10:45:50.519580    8964 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0920 10:45:50.519601    8964 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:45:50.519665    8964 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:45:51.277353    8964 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 10:45:51.277695    8964 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:45:51.282696    8964 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0920 10:45:51.282727    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0920 10:45:51.345266    8964 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:45:51.345281    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0920 10:45:51.582445    8964 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 10:45:51.582487    8964 cache_images.go:92] duration metric: took 2.088043541s to LoadCachedImages
	W0920 10:45:51.582533    8964 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0920 10:45:51.582546    8964 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0920 10:45:51.582595    8964 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-097000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 10:45:51.582669    8964 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 10:45:51.595316    8964 cni.go:84] Creating CNI manager for ""
	I0920 10:45:51.595334    8964 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:45:51.595340    8964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 10:45:51.595348    8964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-097000 NodeName:running-upgrade-097000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 10:45:51.595415    8964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-097000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
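The YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to a single file and consumed by every kubeadm phase via --config. One way to sanity-check such a file before it is applied, assuming the same binary path as the log (a sketch, not part of the test flow itself):

    # Run only kubeadm's preflight checks against the generated config.
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml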
	I0920 10:45:51.595483    8964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0920 10:45:51.598207    8964 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 10:45:51.598240    8964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 10:45:51.600826    8964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0920 10:45:51.605840    8964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 10:45:51.610782    8964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0920 10:45:51.616004    8964 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0920 10:45:51.617308    8964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:45:51.696492    8964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:45:51.702068    8964 certs.go:68] Setting up /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000 for IP: 10.0.2.15
	I0920 10:45:51.702089    8964 certs.go:194] generating shared ca certs ...
	I0920 10:45:51.702098    8964 certs.go:226] acquiring lock for ca certs: {Name:mk223deb0e7531c2ef743391b3102022988e9e71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:45:51.702315    8964 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.key
	I0920 10:45:51.702353    8964 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/proxy-client-ca.key
	I0920 10:45:51.702358    8964 certs.go:256] generating profile certs ...
	I0920 10:45:51.702418    8964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/client.key
	I0920 10:45:51.702435    8964 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/apiserver.key.20db06dc
	I0920 10:45:51.702449    8964 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/apiserver.crt.20db06dc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0920 10:45:51.852692    8964 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/apiserver.crt.20db06dc ...
	I0920 10:45:51.852702    8964 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/apiserver.crt.20db06dc: {Name:mk5d9ab905264c6943f6edd77aad0260e4b47fe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:45:51.852993    8964 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/apiserver.key.20db06dc ...
	I0920 10:45:51.852997    8964 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/apiserver.key.20db06dc: {Name:mkebf0af641e28fd120d806c46b1c701921feca6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:45:51.853142    8964 certs.go:381] copying /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/apiserver.crt.20db06dc -> /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/apiserver.crt
	I0920 10:45:51.853275    8964 certs.go:385] copying /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/apiserver.key.20db06dc -> /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/apiserver.key
	I0920 10:45:51.853420    8964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/proxy-client.key
	I0920 10:45:51.853552    8964 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/7279.pem (1338 bytes)
	W0920 10:45:51.853574    8964 certs.go:480] ignoring /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/7279_empty.pem, impossibly tiny 0 bytes
	I0920 10:45:51.853579    8964 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 10:45:51.853603    8964 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem (1078 bytes)
	I0920 10:45:51.853625    8964 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem (1123 bytes)
	I0920 10:45:51.853642    8964 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/key.pem (1675 bytes)
	I0920 10:45:51.853686    8964 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/ssl/certs/72792.pem (1708 bytes)
	I0920 10:45:51.854069    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 10:45:51.861759    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 10:45:51.868665    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 10:45:51.875467    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 10:45:51.882912    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 10:45:51.890290    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 10:45:51.897311    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 10:45:51.904569    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 10:45:51.911476    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/7279.pem --> /usr/share/ca-certificates/7279.pem (1338 bytes)
	I0920 10:45:51.917848    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/ssl/certs/72792.pem --> /usr/share/ca-certificates/72792.pem (1708 bytes)
	I0920 10:45:51.925282    8964 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 10:45:51.932202    8964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 10:45:51.937211    8964 ssh_runner.go:195] Run: openssl version
	I0920 10:45:51.939003    8964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7279.pem && ln -fs /usr/share/ca-certificates/7279.pem /etc/ssl/certs/7279.pem"
	I0920 10:45:51.941819    8964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7279.pem
	I0920 10:45:51.943235    8964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:32 /usr/share/ca-certificates/7279.pem
	I0920 10:45:51.943256    8964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7279.pem
	I0920 10:45:51.945155    8964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7279.pem /etc/ssl/certs/51391683.0"
	I0920 10:45:51.948076    8964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72792.pem && ln -fs /usr/share/ca-certificates/72792.pem /etc/ssl/certs/72792.pem"
	I0920 10:45:51.951259    8964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72792.pem
	I0920 10:45:51.952700    8964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:32 /usr/share/ca-certificates/72792.pem
	I0920 10:45:51.952720    8964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72792.pem
	I0920 10:45:51.954591    8964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72792.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 10:45:51.957140    8964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 10:45:51.960352    8964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:45:51.961825    8964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:45:51.961847    8964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:45:51.963668    8964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
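The `openssl x509 -hash` runs above compute the subject-name hash that OpenSSL uses to locate CA certificates in /etc/ssl/certs: the hash (e.g. b5213941 for minikubeCA.pem) becomes the symlink name with a .0 suffix. Each test/ln pair is equivalent to, for one certificate:

    # Derive the subject hash and create the lookup symlink OpenSSL expects.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"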
	I0920 10:45:51.966594    8964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 10:45:51.968163    8964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 10:45:51.969962    8964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 10:45:51.971610    8964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 10:45:51.973296    8964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 10:45:51.975199    8964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 10:45:51.977023    8964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
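Each `-checkend 86400` run asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit marks the cert as expiring, which triggers regeneration. For example:

    # Exit status 0: valid for at least another day; 1: expiring or unreadable.
    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "cert ok" || echo "cert expires within 24h"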
	I0920 10:45:51.978787    8964 kubeadm.go:392] StartCluster: {Name:running-upgrade-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:45:51.978858    8964 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:45:51.999376    8964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 10:45:52.002624    8964 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 10:45:52.002634    8964 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 10:45:52.002664    8964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 10:45:52.005649    8964 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:45:52.005685    8964 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-097000" does not appear in /Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:45:52.005700    8964 kubeconfig.go:62] /Users/jenkins/minikube-integration/19679-6783/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-097000" cluster setting kubeconfig missing "running-upgrade-097000" context setting]
	I0920 10:45:52.005856    8964 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/kubeconfig: {Name:mkc202c0538e947b3e0d9844748996d0c112bf36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:45:52.006762    8964 kapi.go:59] client config for running-upgrade-097000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/client.key", CAFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f6e030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:45:52.007690    8964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 10:45:52.010469    8964 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-097000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
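Drift detection here is simply `diff -u` against the freshly templated file: exit status 0 means the on-disk kubeadm.yaml already matches, and a non-zero exit triggers the stop-and-reconfigure path that follows (the new file is copied into place a few lines below). In shell terms, with the same paths as the log:

    # Reconfigure only when the rendered config differs from the live one.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi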
	I0920 10:45:52.010477    8964 kubeadm.go:1160] stopping kube-system containers ...
	I0920 10:45:52.010529    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:45:52.021097    8964 docker.go:483] Stopping containers: [3952ed3bd693 1a9e40be7db6 be88f3976675 dca9f7a0c338 d901a9e09564 46c3297cbb93 d62a60667e06 21241ebf186a c6218011a3d3 5f1cc3744590 52129059d0d7 8908fd974b5c 6a17228953af]
	I0920 10:45:52.021173    8964 ssh_runner.go:195] Run: docker stop 3952ed3bd693 1a9e40be7db6 be88f3976675 dca9f7a0c338 d901a9e09564 46c3297cbb93 d62a60667e06 21241ebf186a c6218011a3d3 5f1cc3744590 52129059d0d7 8908fd974b5c 6a17228953af
	I0920 10:45:52.037651    8964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 10:45:52.141036    8964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:45:52.145088    8964 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Sep 20 17:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Sep 20 17:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 20 17:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 20 17:45 /etc/kubernetes/scheduler.conf
	
	I0920 10:45:52.145128    8964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/admin.conf
	I0920 10:45:52.148367    8964 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:45:52.148404    8964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:45:52.151343    8964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/kubelet.conf
	I0920 10:45:52.154578    8964 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:45:52.154608    8964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:45:52.157848    8964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/controller-manager.conf
	I0920 10:45:52.160651    8964 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:45:52.160675    8964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:45:52.163304    8964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/scheduler.conf
	I0920 10:45:52.166093    8964 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:45:52.166120    8964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 10:45:52.169382    8964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:45:52.172511    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:45:52.250888    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:45:52.853508    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:45:53.044619    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:45:53.065338    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
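Rather than a full `kubeadm init`, the restart path replays five individual init phases against the same config, regenerating certs, kubeconfigs, and static-pod manifests without the full preflight/bootstrap flow. Condensed, the five invocations above are:

    K=/var/lib/minikube/binaries/v1.24.1
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # $phase is intentionally unquoted so "certs all" splits into two arguments.
        sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done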
	I0920 10:45:53.090208    8964 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:45:53.090300    8964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:45:53.591063    8964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:45:54.092356    8964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:45:54.096872    8964 api_server.go:72] duration metric: took 1.006669042s to wait for apiserver process to appear ...
	I0920 10:45:54.096882    8964 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:45:54.096892    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:45:59.098991    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:45:59.099028    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:46:04.099687    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:46:04.099743    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:46:09.100417    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:46:09.100494    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:46:14.101472    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:46:14.101558    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:46:19.102951    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:46:19.103054    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:46:24.104982    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:46:24.105072    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:46:29.107555    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:46:29.107649    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:46:34.110360    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:46:34.110451    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:46:39.113180    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:46:39.113283    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:46:44.115994    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:46:44.116085    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:46:49.118764    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:46:49.118859    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:46:54.121694    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
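The loop above issues a GET to /healthz every few seconds and treats a client timeout as "stopped"; kube-apiserver answers a bare "ok" once it is serving. A minimal stand-alone probe of the same endpoint (a sketch; -k skips TLS verification, which a real check would replace with the cluster CA):

    # Retry until the apiserver reports healthy, with a 5s per-request timeout.
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -qx ok; do
        sleep 5
    done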
	I0920 10:46:54.122202    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:46:54.166730    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:46:54.166883    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:46:54.186356    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:46:54.186461    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:46:54.199861    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:46:54.199959    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:46:54.215965    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:46:54.216051    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:46:54.226344    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:46:54.226421    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:46:54.236510    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:46:54.236584    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:46:54.247130    8964 logs.go:276] 0 containers: []
	W0920 10:46:54.247141    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:46:54.247214    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:46:54.257053    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:46:54.257070    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:46:54.257077    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:46:54.269226    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:46:54.269236    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:46:54.280920    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:46:54.280936    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:46:54.353753    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:46:54.353765    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:46:54.379843    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:46:54.379853    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:46:54.392324    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:46:54.392336    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:46:54.404626    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:46:54.404636    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:46:54.421831    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:46:54.421841    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:46:54.439267    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:46:54.439279    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:46:54.477059    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:46:54.477066    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:46:54.491178    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:46:54.491189    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:46:54.504751    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:46:54.504761    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:46:54.516477    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:46:54.516488    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:46:54.532235    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:46:54.532247    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:46:54.537108    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:46:54.537114    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:46:54.552568    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:46:54.552579    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:46:54.564580    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:46:54.564592    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
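Between health checks the harness runs a fixed diagnostics sweep: it lists the k8s_* containers for each control-plane component, tails the last 400 log lines of each, and pulls the kubelet and docker journals plus dmesg and `kubectl describe nodes`. The near-identical blocks that follow are repeats of this sweep on each retry. One component's slice of the sweep looks roughly like:

    # Find all kube-apiserver containers (running or exited) and tail their logs.
    for c in $(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'); do
        docker logs --tail 400 "$c"
    done
    sudo journalctl -u kubelet -n 400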
	I0920 10:46:57.091657    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:47:02.094113    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:47:02.094650    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:47:02.135471    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:47:02.135632    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:47:02.157444    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:47:02.157580    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:47:02.172962    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:47:02.173063    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:47:02.185410    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:47:02.185497    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:47:02.196370    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:47:02.196456    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:47:02.206574    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:47:02.206670    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:47:02.216763    8964 logs.go:276] 0 containers: []
	W0920 10:47:02.216776    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:47:02.216843    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:47:02.227824    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:47:02.227841    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:47:02.227847    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:47:02.242417    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:47:02.242430    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:47:02.267414    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:47:02.267422    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:47:02.305698    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:47:02.305707    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:47:02.310078    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:47:02.310085    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:47:02.323927    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:47:02.323938    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:47:02.349740    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:47:02.349754    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:47:02.363337    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:47:02.363349    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:47:02.374622    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:47:02.374637    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:47:02.409934    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:47:02.409946    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:47:02.427062    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:47:02.427073    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:47:02.438587    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:47:02.438597    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:47:02.453237    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:47:02.453248    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:47:02.468526    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:47:02.468534    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:47:02.480212    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:47:02.480224    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:47:02.492084    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:47:02.492095    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:47:02.509082    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:47:02.509093    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:47:05.025078    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:47:10.027989    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:47:10.028483    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:47:10.068051    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:47:10.068213    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:47:10.090249    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:47:10.090421    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:47:10.106674    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:47:10.106774    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:47:10.119688    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:47:10.119770    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:47:10.130419    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:47:10.130504    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:47:10.141285    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:47:10.141366    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:47:10.151810    8964 logs.go:276] 0 containers: []
	W0920 10:47:10.151823    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:47:10.151893    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:47:10.162384    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:47:10.162405    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:47:10.162411    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:47:10.176903    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:47:10.176913    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:47:10.194881    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:47:10.194891    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:47:10.207580    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:47:10.207590    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:47:10.219346    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:47:10.219356    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:47:10.243922    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:47:10.243930    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:47:10.281520    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:47:10.281527    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:47:10.319508    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:47:10.319518    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:47:10.339250    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:47:10.339262    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:47:10.343466    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:47:10.343473    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:47:10.355017    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:47:10.355027    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:47:10.366474    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:47:10.366490    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:47:10.380526    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:47:10.380535    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:47:10.395990    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:47:10.395999    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:47:10.411771    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:47:10.411783    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:47:10.423457    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:47:10.423467    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:47:10.435147    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:47:10.435157    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:47:12.962686    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:47:17.965018    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:47:17.965238    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:47:17.976583    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:47:17.976664    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:47:17.987185    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:47:17.987273    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:47:17.997955    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:47:17.998028    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:47:18.008467    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:47:18.008556    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:47:18.018873    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:47:18.018958    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:47:18.029467    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:47:18.029549    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:47:18.043534    8964 logs.go:276] 0 containers: []
	W0920 10:47:18.043546    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:47:18.043616    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:47:18.054395    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:47:18.054414    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:47:18.054422    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:47:18.070322    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:47:18.070332    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:47:18.086217    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:47:18.086227    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:47:18.097273    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:47:18.097285    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:47:18.109152    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:47:18.109162    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:47:18.121174    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:47:18.121188    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:47:18.132927    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:47:18.132935    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:47:18.166998    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:47:18.167013    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:47:18.182590    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:47:18.182599    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:47:18.207010    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:47:18.207026    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:47:18.220644    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:47:18.220655    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:47:18.245666    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:47:18.245673    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:47:18.283447    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:47:18.283456    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:47:18.298912    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:47:18.298923    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:47:18.314662    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:47:18.314675    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:47:18.335585    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:47:18.335595    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:47:18.339845    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:47:18.339852    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:47:20.855002    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:47:25.857832    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:47:25.858472    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:47:25.895478    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:47:25.895637    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:47:25.915902    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:47:25.916009    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:47:25.931545    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:47:25.931648    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:47:25.943605    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:47:25.943689    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:47:25.954683    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:47:25.954764    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:47:25.965070    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:47:25.965151    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:47:25.975093    8964 logs.go:276] 0 containers: []
	W0920 10:47:25.975104    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:47:25.975164    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:47:25.985895    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:47:25.985913    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:47:25.985919    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:47:25.990140    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:47:25.990149    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:47:26.015751    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:47:26.015763    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:47:26.027790    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:47:26.027800    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:47:26.039433    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:47:26.039442    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:47:26.051369    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:47:26.051379    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:47:26.069211    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:47:26.069220    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:47:26.086394    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:47:26.086402    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:47:26.111327    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:47:26.111337    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:47:26.149221    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:47:26.149229    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:47:26.164976    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:47:26.164986    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:47:26.185457    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:47:26.185468    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:47:26.199931    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:47:26.199940    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:47:26.216387    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:47:26.216396    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:47:26.227457    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:47:26.227467    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:47:26.261802    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:47:26.261813    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:47:26.273258    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:47:26.273269    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:47:28.786429    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:47:33.789182    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:47:33.789625    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:47:33.823232    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:47:33.823382    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:47:33.843229    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:47:33.843351    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:47:33.861689    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:47:33.861769    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:47:33.877932    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:47:33.878018    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:47:33.891603    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:47:33.891682    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:47:33.902105    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:47:33.902185    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:47:33.912490    8964 logs.go:276] 0 containers: []
	W0920 10:47:33.912504    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:47:33.912576    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:47:33.924001    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:47:33.924019    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:47:33.924024    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:47:33.940907    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:47:33.940916    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:47:33.952658    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:47:33.952669    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:47:33.964315    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:47:33.964326    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:47:33.968849    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:47:33.968858    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:47:33.980753    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:47:33.980766    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:47:33.992150    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:47:33.992161    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:47:34.028671    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:47:34.028680    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:47:34.043528    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:47:34.043537    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:47:34.060650    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:47:34.060660    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:47:34.072402    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:47:34.072417    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:47:34.098158    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:47:34.098167    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:47:34.111831    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:47:34.111841    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:47:34.123901    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:47:34.123910    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:47:34.135534    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:47:34.135546    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:47:34.161841    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:47:34.161849    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:47:34.201622    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:47:34.201632    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:47:36.730473    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:47:41.732916    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:47:41.733242    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:47:41.762071    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:47:41.762223    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:47:41.781834    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:47:41.781942    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:47:41.795491    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:47:41.795580    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:47:41.806913    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:47:41.806993    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:47:41.817186    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:47:41.817264    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:47:41.827992    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:47:41.828059    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:47:41.838689    8964 logs.go:276] 0 containers: []
	W0920 10:47:41.838702    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:47:41.838771    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:47:41.848933    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:47:41.848952    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:47:41.848957    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:47:41.887129    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:47:41.887139    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:47:41.921899    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:47:41.921911    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:47:41.941372    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:47:41.941384    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:47:41.952532    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:47:41.952544    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:47:41.976594    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:47:41.976601    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:47:41.988583    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:47:41.988592    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:47:41.993321    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:47:41.993330    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:47:42.007502    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:47:42.007512    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:47:42.018966    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:47:42.018974    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:47:42.034878    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:47:42.034887    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:47:42.046088    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:47:42.046097    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:47:42.059818    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:47:42.059830    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:47:42.083861    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:47:42.083869    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:47:42.097904    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:47:42.097915    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:47:42.109461    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:47:42.109473    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:47:42.121242    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:47:42.121252    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:47:44.639990    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:47:49.640819    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:47:49.641374    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:47:49.681983    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:47:49.682145    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:47:49.704046    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:47:49.704179    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:47:49.719878    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:47:49.719967    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:47:49.732427    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:47:49.732518    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:47:49.743718    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:47:49.743791    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:47:49.754325    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:47:49.754396    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:47:49.765624    8964 logs.go:276] 0 containers: []
	W0920 10:47:49.765637    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:47:49.765697    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:47:49.776804    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:47:49.776821    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:47:49.776827    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:47:49.791004    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:47:49.791013    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:47:49.816435    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:47:49.816450    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:47:49.828206    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:47:49.828220    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:47:49.833022    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:47:49.833032    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:47:49.846668    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:47:49.846679    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:47:49.860586    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:47:49.860595    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:47:49.877825    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:47:49.877836    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:47:49.890489    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:47:49.890501    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:47:49.906134    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:47:49.906149    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:47:49.917992    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:47:49.918001    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:47:49.929294    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:47:49.929308    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:47:49.953468    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:47:49.953474    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:47:49.991462    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:47:49.991469    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:47:50.029935    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:47:50.029945    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:47:50.047196    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:47:50.047206    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:47:50.058829    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:47:50.058842    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
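
	Between health checks, the waiter re-discovers each component's containers with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}", as the ssh_runner lines above show, then reports the count ("2 containers: [...]"). A hedged Go sketch of that enumeration step follows, run against a local docker CLI instead of over ssh into the VM; the component list is copied from this log, and nothing here is minikube's actual logs.go code.

```go
// Illustrative sketch only: list container IDs per Kubernetes component using
// the same docker filter the log shows. Requires a local docker CLI.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs shells out to `docker ps -a --filter=name=k8s_<component>
// --format={{.ID}}` and splits the output into IDs.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Component names taken from the enumeration order in the log above.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // analogous to logs.go:276
	}
}
```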
	I0920 10:47:52.572170    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:47:57.575142    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:47:57.575648    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:47:57.620954    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:47:57.621084    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:47:57.639647    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:47:57.639739    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:47:57.660062    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:47:57.660142    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:47:57.670235    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:47:57.670315    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:47:57.683127    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:47:57.683207    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:47:57.693696    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:47:57.693770    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:47:57.708412    8964 logs.go:276] 0 containers: []
	W0920 10:47:57.708428    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:47:57.708499    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:47:57.731401    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:47:57.731418    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:47:57.731424    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:47:57.735900    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:47:57.735906    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:47:57.746766    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:47:57.746776    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:47:57.758558    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:47:57.758567    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:47:57.798415    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:47:57.798423    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:47:57.812438    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:47:57.812450    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:47:57.823992    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:47:57.824003    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:47:57.835413    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:47:57.835423    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:47:57.852729    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:47:57.852739    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:47:57.878337    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:47:57.878343    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:47:57.893921    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:47:57.893931    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:47:57.905515    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:47:57.905531    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:47:57.916780    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:47:57.916789    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:47:57.951770    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:47:57.951786    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:47:57.979830    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:47:57.979840    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:47:57.993351    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:47:57.993361    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:47:58.007624    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:47:58.007635    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:48:00.521157    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:48:05.523515    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:48:05.524069    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:48:05.565595    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:48:05.565747    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:48:05.584855    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:48:05.584941    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:48:05.598560    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:48:05.598626    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:48:05.610330    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:48:05.610415    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:48:05.620830    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:48:05.620898    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:48:05.631606    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:48:05.631687    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:48:05.641944    8964 logs.go:276] 0 containers: []
	W0920 10:48:05.641954    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:48:05.642012    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:48:05.658984    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:48:05.659002    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:48:05.659007    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:48:05.697183    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:48:05.697191    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:48:05.711471    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:48:05.711483    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:48:05.722937    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:48:05.722948    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:48:05.733770    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:48:05.733781    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:48:05.745512    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:48:05.745525    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:48:05.769278    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:48:05.769287    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:48:05.780083    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:48:05.780094    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:48:05.792334    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:48:05.792344    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:48:05.803485    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:48:05.803500    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:48:05.807772    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:48:05.807781    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:48:05.824532    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:48:05.824542    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:48:05.860217    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:48:05.860232    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:48:05.873597    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:48:05.873608    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:48:05.887761    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:48:05.887770    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:48:05.903739    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:48:05.903754    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:48:05.915505    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:48:05.915519    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:48:08.442907    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:48:13.445463    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:48:13.446102    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:48:13.490757    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:48:13.490903    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:48:13.510515    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:48:13.510617    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:48:13.524178    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:48:13.524261    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:48:13.535536    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:48:13.535618    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:48:13.545878    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:48:13.545963    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:48:13.556852    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:48:13.556924    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:48:13.567423    8964 logs.go:276] 0 containers: []
	W0920 10:48:13.567433    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:48:13.567496    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:48:13.578000    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:48:13.578020    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:48:13.578026    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:48:13.618252    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:48:13.618260    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:48:13.622418    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:48:13.622423    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:48:13.636617    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:48:13.636626    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:48:13.650480    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:48:13.650491    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:48:13.661592    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:48:13.661604    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:48:13.696157    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:48:13.696170    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:48:13.709239    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:48:13.709255    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:48:13.730526    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:48:13.730536    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:48:13.741712    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:48:13.741728    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:48:13.753146    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:48:13.753161    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:48:13.777101    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:48:13.777107    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:48:13.789148    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:48:13.789157    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:48:13.814596    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:48:13.814606    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:48:13.829131    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:48:13.829142    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:48:13.840306    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:48:13.840321    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:48:13.851785    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:48:13.851794    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
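
	Once the IDs are known, each "Gathering logs for ..." step in the cycle above reduces to "docker logs --tail 400 <id>" for containers and "journalctl -u <unit> -n 400" for host services such as the kubelet. A minimal local Go sketch of that gathering step follows, assuming docker and systemd are available; the container ID is an example taken from this log and will not exist on another machine, and in the real run these commands go through ssh into the VM.

```go
// Illustrative sketch only: tail a container's log and a systemd unit's
// journal, mirroring the commands the ssh_runner lines above execute.
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command and prints its combined output,
// labelled the way the "Gathering logs for ..." lines are.
func gather(name string, cmd *exec.Cmd) {
	fmt.Printf("Gathering logs for %s ...\n", name) // analogous to logs.go:123
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	// Example container ID copied from this log; replace with a real ID.
	gather("kube-apiserver [6a6e92b2a3ea]",
		exec.Command("docker", "logs", "--tail", "400", "6a6e92b2a3ea"))
	gather("kubelet",
		exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400"))
}
```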
	I0920 10:48:16.369410    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:48:21.369812    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:48:21.369975    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:48:21.381430    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:48:21.381543    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:48:21.391559    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:48:21.391659    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:48:21.401821    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:48:21.401902    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:48:21.419243    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:48:21.419336    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:48:21.429589    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:48:21.429678    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:48:21.440484    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:48:21.440570    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:48:21.450777    8964 logs.go:276] 0 containers: []
	W0920 10:48:21.450786    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:48:21.450847    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:48:21.461156    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:48:21.461171    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:48:21.461176    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:48:21.472129    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:48:21.472140    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:48:21.513909    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:48:21.513930    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:48:21.541482    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:48:21.541496    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:48:21.554163    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:48:21.554179    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:48:21.566082    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:48:21.566097    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:48:21.591440    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:48:21.591454    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:48:21.607429    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:48:21.607441    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:48:21.644767    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:48:21.644784    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:48:21.659312    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:48:21.659323    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:48:21.671157    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:48:21.671170    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:48:21.688817    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:48:21.688833    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:48:21.705634    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:48:21.705649    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:48:21.717341    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:48:21.717356    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:48:21.721713    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:48:21.721719    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:48:21.735857    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:48:21.735871    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:48:21.749644    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:48:21.749653    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:48:24.263490    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:48:29.266136    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:48:29.266273    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:48:29.279384    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:48:29.279485    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:48:29.290784    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:48:29.290865    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:48:29.302378    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:48:29.302459    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:48:29.313912    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:48:29.314012    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:48:29.324165    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:48:29.324251    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:48:29.335327    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:48:29.335404    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:48:29.345619    8964 logs.go:276] 0 containers: []
	W0920 10:48:29.345630    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:48:29.345688    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:48:29.366608    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:48:29.366628    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:48:29.366634    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:48:29.391790    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:48:29.391804    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:48:29.408663    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:48:29.408678    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:48:29.421491    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:48:29.421506    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:48:29.433287    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:48:29.433302    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:48:29.448309    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:48:29.448318    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:48:29.465406    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:48:29.465416    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:48:29.477529    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:48:29.477540    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:48:29.489521    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:48:29.489531    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:48:29.525009    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:48:29.525019    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:48:29.550190    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:48:29.550206    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:48:29.564734    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:48:29.564745    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:48:29.576899    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:48:29.576911    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:48:29.593837    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:48:29.593850    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:48:29.616955    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:48:29.616967    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:48:29.657762    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:48:29.657774    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:48:29.662311    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:48:29.662319    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:48:32.176894    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:48:37.179078    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:48:37.179281    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:48:37.191310    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:48:37.191400    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:48:37.203300    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:48:37.203391    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:48:37.214621    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:48:37.214696    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:48:37.229547    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:48:37.229633    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:48:37.240517    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:48:37.240593    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:48:37.251836    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:48:37.251905    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:48:37.273768    8964 logs.go:276] 0 containers: []
	W0920 10:48:37.273782    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:48:37.273854    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:48:37.285385    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:48:37.285404    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:48:37.285411    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:48:37.300926    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:48:37.300938    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:48:37.313047    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:48:37.313063    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:48:37.324432    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:48:37.324442    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:48:37.338504    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:48:37.338515    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:48:37.356364    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:48:37.356379    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:48:37.368361    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:48:37.368373    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:48:37.381240    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:48:37.381249    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:48:37.420444    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:48:37.420455    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:48:37.434933    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:48:37.434948    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:48:37.449820    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:48:37.449833    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:48:37.462185    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:48:37.462200    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:48:37.473685    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:48:37.473701    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:48:37.499116    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:48:37.499125    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:48:37.504043    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:48:37.504053    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:48:37.529890    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:48:37.529901    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:48:37.546161    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:48:37.546171    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:48:40.087908    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:48:45.090557    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:48:45.090748    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:48:45.103489    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:48:45.103573    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:48:45.114003    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:48:45.114076    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:48:45.124168    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:48:45.124254    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:48:45.134759    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:48:45.134836    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:48:45.145465    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:48:45.145548    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:48:45.156148    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:48:45.156231    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:48:45.166478    8964 logs.go:276] 0 containers: []
	W0920 10:48:45.166489    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:48:45.166558    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:48:45.176790    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:48:45.176807    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:48:45.176812    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:48:45.188452    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:48:45.188463    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:48:45.202785    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:48:45.202794    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:48:45.215210    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:48:45.215221    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:48:45.227047    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:48:45.227058    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:48:45.239043    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:48:45.239058    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:48:45.243412    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:48:45.243421    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:48:45.255105    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:48:45.255115    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:48:45.271166    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:48:45.271177    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:48:45.296378    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:48:45.296391    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:48:45.308229    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:48:45.308241    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:48:45.348480    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:48:45.348489    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:48:45.382403    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:48:45.382414    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:48:45.396647    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:48:45.396660    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:48:45.414370    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:48:45.414383    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:48:45.439629    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:48:45.439639    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:48:45.453811    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:48:45.453826    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:48:47.969924    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:48:52.972173    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:48:52.972368    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:48:52.993461    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:48:52.993550    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:48:53.004328    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:48:53.004417    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:48:53.014645    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:48:53.014725    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:48:53.025224    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:48:53.025305    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:48:53.035745    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:48:53.035832    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:48:53.054755    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:48:53.054846    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:48:53.065460    8964 logs.go:276] 0 containers: []
	W0920 10:48:53.065472    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:48:53.065543    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:48:53.075623    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:48:53.075639    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:48:53.075644    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:48:53.080162    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:48:53.080167    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:48:53.094045    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:48:53.094055    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:48:53.108344    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:48:53.108360    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:48:53.120553    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:48:53.120568    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:48:53.131667    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:48:53.131675    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:48:53.143383    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:48:53.143395    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:48:53.155291    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:48:53.155302    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:48:53.167161    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:48:53.167172    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:48:53.182119    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:48:53.182130    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:48:53.193616    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:48:53.193627    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:48:53.218611    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:48:53.218621    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:48:53.259807    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:48:53.259828    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:48:53.296617    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:48:53.296628    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:48:53.321911    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:48:53.321929    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:48:53.333006    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:48:53.333017    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:48:53.348982    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:48:53.348992    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:48:55.873424    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:00.875761    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:00.875886    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:00.887672    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:00.887752    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:00.898818    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:00.898894    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:00.910354    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:00.910428    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:00.921860    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:00.921929    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:00.933064    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:00.933141    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:00.943730    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:00.943807    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:00.953994    8964 logs.go:276] 0 containers: []
	W0920 10:49:00.954007    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:00.954072    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:00.968671    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:00.968692    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:00.968697    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:00.989214    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:00.989232    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:01.002537    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:01.002548    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:01.039476    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:01.039488    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:01.053039    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:01.053050    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:01.077731    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:01.077756    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:01.083192    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:01.083203    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:01.098032    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:01.098043    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:01.113635    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:01.113646    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:01.126245    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:01.126258    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:01.140529    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:01.140547    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:01.155795    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:01.155809    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:01.193569    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:01.193577    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:01.219064    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:01.219074    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:01.232554    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:01.232565    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:01.243804    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:01.243814    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:01.254958    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:01.254969    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:03.768734    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:08.771420    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:08.771532    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:08.787978    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:08.788051    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:08.800664    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:08.800749    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:08.811031    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:08.811102    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:08.821953    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:08.822036    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:08.840953    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:08.841037    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:08.856536    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:08.856622    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:08.866424    8964 logs.go:276] 0 containers: []
	W0920 10:49:08.866437    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:08.866498    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:08.877599    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:08.877618    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:08.877623    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:08.891056    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:08.891065    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:08.916542    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:08.916555    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:08.927879    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:08.927890    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:08.940168    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:08.940178    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:08.956989    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:08.957004    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:08.972520    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:08.972531    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:09.008190    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:09.008201    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:09.022451    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:09.022460    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:09.039582    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:09.039593    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:09.056373    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:09.056382    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:09.068971    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:09.068982    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:09.084961    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:09.084970    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:09.125291    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:09.125299    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:09.129577    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:09.129587    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:09.144129    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:09.144138    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:09.158445    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:09.158455    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:11.684976    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:16.687286    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:16.687751    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:16.722226    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:16.722386    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:16.742015    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:16.742148    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:16.756337    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:16.756424    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:16.768671    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:16.768752    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:16.779531    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:16.779603    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:16.790228    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:16.790311    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:16.800543    8964 logs.go:276] 0 containers: []
	W0920 10:49:16.800560    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:16.800623    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:16.811037    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:16.811055    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:16.811060    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:16.822525    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:16.822539    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:16.834136    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:16.834149    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:16.847875    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:16.847884    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:16.860532    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:16.860542    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:16.875571    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:16.875586    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:16.900283    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:16.900298    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:16.917204    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:16.917217    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:16.944371    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:16.944392    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:16.961451    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:16.961469    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:16.978158    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:16.978175    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:16.995688    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:16.995699    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:17.019449    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:17.019456    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:17.030722    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:17.030732    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:17.070241    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:17.070248    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:17.104164    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:17.104174    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:17.109116    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:17.109123    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:19.632178    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:24.634504    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:24.634622    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:24.646562    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:24.646642    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:24.657672    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:24.657763    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:24.668983    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:24.669065    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:24.681141    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:24.681230    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:24.692696    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:24.692775    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:24.705906    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:24.705997    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:24.717097    8964 logs.go:276] 0 containers: []
	W0920 10:49:24.717111    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:24.717183    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:24.728498    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:24.728519    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:24.728525    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:24.733588    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:24.733601    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:24.748195    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:24.748214    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:24.765141    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:24.765151    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:24.789835    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:24.789851    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:24.829759    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:24.829771    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:24.856558    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:24.856579    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:24.871717    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:24.871732    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:24.885386    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:24.885399    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:24.905436    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:24.905451    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:24.918882    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:24.918899    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:24.931178    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:24.931191    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:24.973781    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:24.973800    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:24.993681    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:24.993701    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:25.010877    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:25.010890    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:25.024421    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:25.024436    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:25.041672    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:25.041722    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:27.561425    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:32.563580    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:32.563704    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:32.575508    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:32.575598    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:32.591322    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:32.591404    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:32.603614    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:32.603695    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:32.614675    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:32.614756    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:32.625388    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:32.625481    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:32.636387    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:32.636471    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:32.647240    8964 logs.go:276] 0 containers: []
	W0920 10:49:32.647252    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:32.647325    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:32.658742    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:32.658760    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:32.658766    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:32.674750    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:32.674763    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:32.692912    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:32.692929    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:32.717705    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:32.717724    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:32.760948    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:32.760970    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:32.778449    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:32.778466    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:32.795957    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:32.795971    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:32.808431    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:32.808443    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:32.822303    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:32.822314    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:32.834197    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:32.834211    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:32.838492    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:32.838500    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:32.877086    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:32.877098    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:32.891398    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:32.891410    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:32.906532    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:32.906546    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:32.919875    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:32.919888    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:32.932188    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:32.932200    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:32.960688    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:32.960707    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:35.474921    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:40.477590    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:40.478140    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:40.520367    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:40.520532    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:40.541090    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:40.541230    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:40.562816    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:40.562914    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:40.575009    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:40.575097    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:40.585526    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:40.585614    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:40.597260    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:40.597345    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:40.608422    8964 logs.go:276] 0 containers: []
	W0920 10:49:40.608435    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:40.608512    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:40.619899    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:40.619917    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:40.619923    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:40.638103    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:40.638114    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:40.652127    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:40.652137    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:40.668366    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:40.668375    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:40.703795    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:40.703806    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:40.720218    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:40.720228    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:40.738559    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:40.738568    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:40.750523    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:40.750540    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:40.763997    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:40.764013    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:40.805167    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:40.805187    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:40.809684    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:40.809691    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:40.823877    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:40.823886    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:40.835518    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:40.835528    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:40.853151    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:40.853161    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:40.878908    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:40.878922    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:40.891519    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:40.891530    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:40.902901    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:40.902917    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:43.427674    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:48.428217    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:48.428363    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:48.440606    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:48.440698    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:48.456491    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:48.456581    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:48.467735    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:48.467816    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:48.478420    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:48.478509    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:48.489078    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:48.489159    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:48.500751    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:48.500831    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:48.511852    8964 logs.go:276] 0 containers: []
	W0920 10:49:48.511863    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:48.511936    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:48.527339    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:48.527361    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:48.527367    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:48.531686    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:48.531691    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:48.545475    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:48.545486    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:48.570691    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:48.570706    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:48.584406    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:48.584417    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:48.598784    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:48.598795    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:48.619410    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:48.619421    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:48.630710    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:48.630721    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:48.646306    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:48.646318    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:48.658055    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:48.658066    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:48.670014    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:48.670026    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:48.694032    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:48.694042    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:48.731544    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:48.731555    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:48.743451    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:48.743462    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:48.761530    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:48.761546    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:48.802814    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:48.802826    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:48.815063    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:48.815077    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:51.329774    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:56.330585    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:56.330644    8964 kubeadm.go:597] duration metric: took 4m4.328905667s to restartPrimaryControlPlane
	W0920 10:49:56.330694    8964 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
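	(Note: the repeated cycles above are minikube's apiserver health probe: each round GETs /healthz with a ~5s client timeout, and on failure re-enumerates the control-plane containers and tails their logs before retrying. A minimal bash sketch of that probe-and-gather pattern — the endpoint and timeout are taken from the log; the retry count, sleep interval, and the single-container filter are assumptions for illustration:

	# Illustrative reproduction of the probe loop in api_server.go:253/269 above.
	# -k skips TLS verification; --max-time 5 mirrors the ~5s client timeout.
	for i in $(seq 1 60); do
	  if curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; then
	    echo "apiserver healthy"; break
	  fi
	  # On failure the log re-enumerates control-plane containers and tails them:
	  docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}' |
	    xargs -r -n1 docker logs --tail 400
	  sleep 2
	done
	)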
	I0920 10:49:56.330709    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0920 10:49:57.300607    8964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:49:57.305638    8964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:49:57.308787    8964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:49:57.311521    8964 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:49:57.311527    8964 kubeadm.go:157] found existing configuration files:
	
	I0920 10:49:57.311557    8964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/admin.conf
	I0920 10:49:57.314128    8964 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:49:57.314152    8964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:49:57.317381    8964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/kubelet.conf
	I0920 10:49:57.320359    8964 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:49:57.320386    8964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:49:57.322855    8964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/controller-manager.conf
	I0920 10:49:57.325643    8964 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:49:57.325671    8964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:49:57.328556    8964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/scheduler.conf
	I0920 10:49:57.331123    8964 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:49:57.331148    8964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
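	(Note: the grep/rm sequence above is the stale-kubeconfig sweep from kubeadm.go:163: each managed kubeconfig is kept only if it references the expected control-plane endpoint. A rough bash equivalent, sketch only — the endpoint string and file names are copied from the log:

	endpoint="https://control-plane.minikube.internal:51293"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
	)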
	I0920 10:49:57.333819    8964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 10:49:57.351442    8964 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0920 10:49:57.351549    8964 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 10:49:57.398406    8964 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 10:49:57.398556    8964 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 10:49:57.398723    8964 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 10:49:57.447772    8964 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 10:49:57.451882    8964 out.go:235]   - Generating certificates and keys ...
	I0920 10:49:57.451917    8964 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 10:49:57.451951    8964 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 10:49:57.452068    8964 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 10:49:57.452280    8964 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 10:49:57.452316    8964 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 10:49:57.452346    8964 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 10:49:57.452402    8964 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 10:49:57.452489    8964 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 10:49:57.452563    8964 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 10:49:57.452638    8964 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 10:49:57.452679    8964 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 10:49:57.452723    8964 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 10:49:57.536733    8964 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 10:49:57.940061    8964 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 10:49:58.042930    8964 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 10:49:58.095354    8964 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 10:49:58.124806    8964 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 10:49:58.125180    8964 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 10:49:58.125234    8964 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 10:49:58.207710    8964 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 10:49:58.211973    8964 out.go:235]   - Booting up control plane ...
	I0920 10:49:58.212117    8964 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:49:58.212205    8964 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:49:58.212319    8964 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:49:58.212362    8964 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:49:58.213383    8964 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 10:50:02.717211    8964 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503482 seconds
	I0920 10:50:02.717331    8964 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:50:02.722205    8964 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:50:03.234525    8964 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:50:03.234798    8964 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-097000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 10:50:03.741609    8964 kubeadm.go:310] [bootstrap-token] Using token: xcvjoh.a860vdhghdggd721
	I0920 10:50:03.747839    8964 out.go:235]   - Configuring RBAC rules ...
	I0920 10:50:03.747920    8964 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:50:03.747987    8964 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 10:50:03.750412    8964 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:50:03.754539    8964 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:50:03.755603    8964 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:50:03.756871    8964 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:50:03.760516    8964 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 10:50:03.927026    8964 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 10:50:04.147098    8964 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 10:50:04.147634    8964 kubeadm.go:310] 
	I0920 10:50:04.147670    8964 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 10:50:04.147674    8964 kubeadm.go:310] 
	I0920 10:50:04.147715    8964 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 10:50:04.147751    8964 kubeadm.go:310] 
	I0920 10:50:04.147768    8964 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 10:50:04.147816    8964 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:50:04.147849    8964 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:50:04.147851    8964 kubeadm.go:310] 
	I0920 10:50:04.147877    8964 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 10:50:04.147887    8964 kubeadm.go:310] 
	I0920 10:50:04.147915    8964 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 10:50:04.147918    8964 kubeadm.go:310] 
	I0920 10:50:04.147943    8964 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 10:50:04.147980    8964 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:50:04.148024    8964 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:50:04.148031    8964 kubeadm.go:310] 
	I0920 10:50:04.148084    8964 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 10:50:04.148134    8964 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 10:50:04.148138    8964 kubeadm.go:310] 
	I0920 10:50:04.148182    8964 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xcvjoh.a860vdhghdggd721 \
	I0920 10:50:04.148241    8964 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:060a9df3d803721427aee4d9db182572971f8fddfdaccc18183246a007d5e636 \
	I0920 10:50:04.148253    8964 kubeadm.go:310] 	--control-plane 
	I0920 10:50:04.148258    8964 kubeadm.go:310] 
	I0920 10:50:04.148306    8964 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:50:04.148312    8964 kubeadm.go:310] 
	I0920 10:50:04.148356    8964 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xcvjoh.a860vdhghdggd721 \
	I0920 10:50:04.148421    8964 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:060a9df3d803721427aee4d9db182572971f8fddfdaccc18183246a007d5e636 
	I0920 10:50:04.148499    8964 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 10:50:04.148509    8964 cni.go:84] Creating CNI manager for ""
	I0920 10:50:04.148517    8964 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:50:04.154226    8964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:50:04.160160    8964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:50:04.163306    8964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
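	(Note: the 496-byte conflist copied above is not reproduced in the log. For orientation, a bridge CNI config of the shape minikube writes looks roughly like the following sketch; the exact file contents, including the pod subnet, are assumptions, not the real bytes:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
	)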
	I0920 10:50:04.168295    8964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:50:04.168356    8964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:50:04.168368    8964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-097000 minikube.k8s.io/updated_at=2024_09_20T10_50_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=running-upgrade-097000 minikube.k8s.io/primary=true
	I0920 10:50:04.213858    8964 ops.go:34] apiserver oom_adj: -16
	I0920 10:50:04.213926    8964 kubeadm.go:1113] duration metric: took 45.626292ms to wait for elevateKubeSystemPrivileges
	I0920 10:50:04.213938    8964 kubeadm.go:394] duration metric: took 4m12.236082375s to StartCluster
	I0920 10:50:04.213947    8964 settings.go:142] acquiring lock: {Name:mk90c7bb0a96d07865bd05b5bab2437d4acfe4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:50:04.214113    8964 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:50:04.214470    8964 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/kubeconfig: {Name:mkc202c0538e947b3e0d9844748996d0c112bf36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:50:04.214666    8964 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:50:04.214737    8964 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:50:04.214765    8964 config.go:182] Loaded profile config "running-upgrade-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:50:04.214767    8964 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-097000"
	I0920 10:50:04.214829    8964 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-097000"
	W0920 10:50:04.214835    8964 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:50:04.214770    8964 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-097000"
	I0920 10:50:04.214888    8964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-097000"
	I0920 10:50:04.214861    8964 host.go:66] Checking if "running-upgrade-097000" exists ...
	I0920 10:50:04.216000    8964 kapi.go:59] client config for running-upgrade-097000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/client.key", CAFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f6e030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:50:04.216128    8964 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-097000"
	W0920 10:50:04.216135    8964 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:50:04.216149    8964 host.go:66] Checking if "running-upgrade-097000" exists ...
	I0920 10:50:04.219072    8964 out.go:177] * Verifying Kubernetes components...
	I0920 10:50:04.219415    8964 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:50:04.223165    8964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:50:04.223174    8964 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/running-upgrade-097000/id_rsa Username:docker}
	I0920 10:50:04.227010    8964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:50:04.230055    8964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:50:04.233063    8964 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:50:04.233070    8964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:50:04.233075    8964 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/running-upgrade-097000/id_rsa Username:docker}
	I0920 10:50:04.302739    8964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:50:04.308258    8964 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:50:04.308312    8964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:50:04.312204    8964 api_server.go:72] duration metric: took 97.525917ms to wait for apiserver process to appear ...
	I0920 10:50:04.312211    8964 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:50:04.312218    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:04.318994    8964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:50:04.338124    8964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:50:04.631642    8964 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 10:50:04.631655    8964 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 10:50:09.314314    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:09.314354    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:14.314616    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:14.314638    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:19.314968    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:19.314990    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:24.315873    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:24.315914    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:29.317051    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:29.317089    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:34.318132    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:34.318159    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0920 10:50:34.633697    8964 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0920 10:50:34.638074    8964 out.go:177] * Enabled addons: storage-provisioner
	I0920 10:50:34.646114    8964 addons.go:510] duration metric: took 30.431509s for enable addons: enabled=[storage-provisioner]
	I0920 10:50:39.319368    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:39.319402    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:44.320987    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:44.321038    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:49.321566    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:49.321624    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:54.323822    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:54.323848    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:59.326052    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:59.326098    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:04.328370    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
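The healthz probe the harness keeps repeating above can be reproduced by hand against the same endpoint. A minimal check, assuming shell access to a machine that can reach 10.0.2.15 (substitute the client.crt/client.key/ca.crt paths from the rest.Config line earlier for -k to verify TLS properly):

	# Unauthenticated liveness probe; -k skips TLS verification for brevity
	curl -k --max-time 5 https://10.0.2.15:8443/healthz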
	I0920 10:51:04.328493    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:04.346365    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:04.346455    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:04.361642    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:04.361728    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:04.371984    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:04.372059    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:04.382890    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:04.382962    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:04.397593    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:04.397666    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:04.410606    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:04.410693    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:04.420768    8964 logs.go:276] 0 containers: []
	W0920 10:51:04.420778    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:04.420842    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:04.431478    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:04.431493    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:04.431499    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:04.445843    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:04.445856    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:04.457990    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:04.458010    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:04.472564    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:04.472575    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:04.484344    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:04.484358    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:04.504028    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:04.504047    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:04.541233    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:04.541252    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:04.577918    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:04.577933    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:04.592195    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:04.592206    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:04.604429    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:04.604440    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:04.617540    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:04.617553    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:04.622252    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:04.622260    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:04.634535    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:04.634547    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
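Each diagnostic pass above runs the same two commands per control-plane component: list matching containers, then tail their logs. Condensed, the repeated log lines amount to roughly this loop (a sketch, not the harness's actual code):

	# Tail the last 400 log lines of every k8s_* container minikube inspects
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
	    echo "== logs for ${c} (${id}) =="
	    docker logs --tail 400 "${id}"
	  done
	done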
	I0920 10:51:07.160581    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:12.162977    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:12.163199    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:12.180706    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:12.180808    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:12.193851    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:12.193943    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:12.206179    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:12.206256    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:12.217826    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:12.217911    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:12.228091    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:12.228176    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:12.238619    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:12.238696    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:12.248772    8964 logs.go:276] 0 containers: []
	W0920 10:51:12.248787    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:12.248859    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:12.259370    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:12.259386    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:12.259392    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:12.264281    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:12.264288    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:12.278542    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:12.278552    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:12.293326    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:12.293335    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:12.305112    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:12.305122    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:12.316942    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:12.316953    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:12.334738    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:12.334752    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:12.345999    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:12.346015    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:12.379865    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:12.379872    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:12.414408    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:12.414421    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:12.428096    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:12.428105    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:12.440386    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:12.440398    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:12.451971    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:12.451981    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:14.977167    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:19.977629    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:19.977902    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:19.996280    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:19.996400    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:20.009903    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:20.009990    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:20.025305    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:20.025387    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:20.037114    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:20.037205    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:20.048307    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:20.048398    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:20.059227    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:20.059308    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:20.069634    8964 logs.go:276] 0 containers: []
	W0920 10:51:20.069645    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:20.069719    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:20.081914    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:20.081932    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:20.081939    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:20.097408    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:20.097419    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:20.121451    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:20.121470    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:20.132724    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:20.132736    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:20.167609    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:20.167620    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:20.182567    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:20.182583    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:20.196700    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:20.196711    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:20.207987    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:20.207999    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:20.223702    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:20.223718    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:20.235867    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:20.235878    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:20.253755    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:20.253765    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:20.265870    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:20.265881    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:20.299432    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:20.299443    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:22.806277    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:27.809037    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:27.809368    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:27.833536    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:27.833671    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:27.850092    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:27.850187    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:27.863545    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:27.863628    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:27.874612    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:27.874698    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:27.885250    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:27.885339    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:27.895703    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:27.895780    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:27.909174    8964 logs.go:276] 0 containers: []
	W0920 10:51:27.909187    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:27.909256    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:27.919543    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:27.919559    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:27.919565    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:27.933620    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:27.933630    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:27.945459    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:27.945470    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:27.957146    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:27.957159    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:27.983336    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:27.983349    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:27.987951    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:27.987959    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:28.030324    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:28.030336    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:28.045497    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:28.045510    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:28.056980    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:28.056991    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:28.069167    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:28.069179    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:28.083670    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:28.083681    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:28.101538    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:28.101549    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:28.113276    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:28.113292    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:30.650333    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:35.652805    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:35.653395    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:35.689749    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:35.689917    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:35.710177    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:35.710285    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:35.725328    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:35.725424    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:35.737775    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:35.737860    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:35.748460    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:35.748537    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:35.762625    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:35.762701    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:35.772797    8964 logs.go:276] 0 containers: []
	W0920 10:51:35.772811    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:35.772886    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:35.782782    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:35.782797    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:35.782803    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:35.798409    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:35.798423    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:35.810889    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:35.810905    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:35.822816    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:35.822827    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:35.840663    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:35.840676    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:35.852493    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:35.852504    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:35.875406    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:35.875414    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:35.908492    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:35.908503    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:35.912726    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:35.912734    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:35.924793    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:35.924803    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:35.939521    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:35.939531    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:35.951106    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:35.951120    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:35.986586    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:35.986602    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:38.502578    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:43.504913    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:43.505109    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:43.519130    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:43.519218    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:43.530724    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:43.530814    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:43.545490    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:43.545575    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:43.556214    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:43.556295    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:43.566483    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:43.566569    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:43.576972    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:43.577052    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:43.587219    8964 logs.go:276] 0 containers: []
	W0920 10:51:43.587229    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:43.587296    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:43.597337    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:43.597355    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:43.597361    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:43.630864    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:43.630873    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:43.667286    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:43.667302    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:43.681826    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:43.681842    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:43.701484    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:43.701500    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:43.713127    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:43.713141    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:43.724971    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:43.724979    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:43.747817    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:43.747826    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:43.760935    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:43.760952    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:43.765103    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:43.765111    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:43.779241    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:43.779251    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:43.794315    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:43.794331    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:43.812163    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:43.812177    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:46.324508    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:51.327186    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:51.327510    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:51.360616    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:51.360765    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:51.378625    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:51.378765    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:51.392527    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:51.392612    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:51.407582    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:51.407659    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:51.418857    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:51.418939    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:51.430128    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:51.430207    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:51.440442    8964 logs.go:276] 0 containers: []
	W0920 10:51:51.440453    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:51.440517    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:51.451550    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:51.451566    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:51.451572    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:51.464115    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:51.464125    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:51.482166    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:51.482181    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:51.507565    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:51.507573    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:51.541972    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:51.541980    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:51.578896    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:51.578908    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:51.593916    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:51.593927    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:51.608753    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:51.608765    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:51.620263    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:51.620274    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:51.632522    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:51.632538    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:51.636815    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:51.636822    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:51.654257    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:51.654268    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:51.666891    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:51.666902    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:54.181478    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:59.183742    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:59.183914    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:59.197429    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:59.197503    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:59.210124    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:59.210211    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:59.221520    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:59.221607    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:59.232669    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:59.232741    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:59.243702    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:59.243792    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:59.254694    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:59.254775    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:59.264997    8964 logs.go:276] 0 containers: []
	W0920 10:51:59.265009    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:59.265080    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:59.276350    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:59.276365    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:59.276370    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:59.288431    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:59.288441    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:59.304796    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:59.304808    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:59.340609    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:59.340622    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:59.355726    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:59.355738    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:59.367891    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:59.367903    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:59.382674    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:59.382684    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:59.401280    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:59.401290    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:59.413591    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:59.413601    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:59.438822    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:59.438831    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:59.451191    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:59.451204    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:59.487349    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:59.487364    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:59.491969    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:59.491975    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:02.007756    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:07.010357    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:07.010503    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:07.023199    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:07.023285    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:07.034553    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:07.034640    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:07.045217    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:52:07.045302    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:07.056746    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:07.056824    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:07.067040    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:07.067126    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:07.077173    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:07.077253    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:07.092094    8964 logs.go:276] 0 containers: []
	W0920 10:52:07.092105    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:07.092175    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:07.102756    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:07.102772    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:07.102778    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:07.136351    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:07.136360    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:07.140921    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:07.140930    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:07.180224    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:07.180234    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:07.194009    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:07.194020    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:07.209052    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:07.209063    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:07.228997    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:07.229013    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:07.243853    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:07.243864    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:07.255428    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:07.255445    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:07.272348    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:07.272358    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:07.283491    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:07.283499    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:07.308654    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:07.308661    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:07.324927    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:07.324944    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:09.837148    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:14.839550    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:14.839869    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:14.865858    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:14.865998    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:14.889769    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:14.889865    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:14.902675    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:52:14.902762    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:14.913313    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:14.913398    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:14.924686    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:14.924772    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:14.934745    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:14.934825    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:14.944649    8964 logs.go:276] 0 containers: []
	W0920 10:52:14.944661    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:14.944732    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:14.956628    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:14.956644    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:14.956650    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:14.971470    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:14.971479    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:14.983241    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:14.983252    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:14.994799    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:14.994810    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:15.027581    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:15.027588    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:15.031961    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:15.031967    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:15.067290    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:15.067300    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:15.081630    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:15.081641    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:15.096294    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:15.096306    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:15.109727    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:15.109743    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:15.121612    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:15.121624    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:15.133742    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:15.133757    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:15.153295    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:15.153311    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:17.677878    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:22.680252    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:22.680436    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:22.700127    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:22.700222    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:22.710962    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:22.711046    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:22.721934    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:52:22.722011    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:22.732317    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:22.732399    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:22.742215    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:22.742296    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:22.755157    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:22.755244    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:22.765198    8964 logs.go:276] 0 containers: []
	W0920 10:52:22.765209    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:22.765272    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:22.775732    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:22.775750    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:22.775756    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:22.793810    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:22.793822    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:22.805902    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:22.805914    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:22.810526    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:22.810533    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:22.876526    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:22.876536    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:22.897292    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:22.897303    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:22.909628    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:22.909639    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:22.921291    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:22.921300    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:22.935941    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:22.935952    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:22.948122    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:22.948138    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:22.964532    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:22.964541    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:22.999107    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:52:22.999115    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:52:23.010319    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:23.010331    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:23.024592    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:52:23.024609    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:52:23.044705    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:23.044717    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
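
Each sweep is bracketed by a health probe against https://10.0.2.15:8443/healthz; the five-second gap between every "Checking apiserver healthz" line and its "stopped ... Client.Timeout exceeded" follow-up is the HTTP client timeout. A rough shell approximation of that retry loop is sketched below. The endpoint and the ~5 s timeout are taken from the log; the pause between attempts (about 3 s, inferred from the timestamps) and the attempt cap are assumptions:

    #!/bin/bash
    # Rough approximation of the healthz retry loop visible in the log.
    # URL and ~5 s timeout come from the log lines; the sleep interval
    # and attempt cap are assumptions for the sketch.
    HEALTHZ_URL="https://10.0.2.15:8443/healthz"
    for attempt in $(seq 1 60); do
      # -k skips TLS verification (the in-guest apiserver uses its own CA);
      # --max-time 5 mirrors the Client.Timeout seen in the log.
      if curl -ksf --max-time 5 "$HEALTHZ_URL" >/dev/null; then
        echo "apiserver healthy after ${attempt} attempt(s)"
        exit 0
      fi
      echo "healthz check stopped (attempt ${attempt}); gathering logs and retrying" >&2
      sleep 3
    done
    echo "apiserver never became healthy" >&2
    exit 1
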
	I0920 10:52:25.572130    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:30.574503    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:30.574671    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:30.587223    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:30.587297    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:30.597875    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:30.597960    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:30.608816    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:52:30.608905    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:30.619680    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:30.619760    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:30.630176    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:30.630262    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:30.640626    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:30.640701    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:30.651229    8964 logs.go:276] 0 containers: []
	W0920 10:52:30.651243    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:30.651317    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:30.661207    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:30.661225    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:52:30.661231    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:52:30.672326    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:30.672338    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:30.698188    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:30.698199    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:30.709841    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:30.709853    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:30.744814    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:30.744824    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:30.782144    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:30.782154    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:30.800911    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:30.800921    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:30.813141    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:30.813153    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:30.831049    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:30.831059    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:30.835442    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:52:30.835451    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:52:30.846387    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:30.846399    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:30.857747    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:30.857758    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:30.869521    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:30.869535    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:30.884147    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:30.884162    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:30.896328    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:30.896340    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:33.413572    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:38.413006    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:38.413460    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:38.446511    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:38.446686    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:38.469786    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:38.469898    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:38.484680    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:52:38.484770    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:38.501291    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:38.501385    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:38.512151    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:38.512233    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:38.522888    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:38.522982    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:38.533284    8964 logs.go:276] 0 containers: []
	W0920 10:52:38.533297    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:38.533370    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:38.544318    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:38.544334    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:38.544340    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:38.559065    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:38.559076    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:38.573463    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:38.573476    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:38.595629    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:38.595642    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:38.630501    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:38.630508    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:38.634940    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:38.634948    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:38.648802    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:52:38.648813    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:52:38.660326    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:38.660336    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:38.674944    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:38.674955    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:38.710805    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:38.710819    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:38.722761    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:38.722778    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:38.734723    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:38.734734    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:38.760063    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:52:38.760075    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:52:38.771826    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:38.771836    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:38.787258    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:38.787269    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:41.300165    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:46.300480    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:46.300668    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:46.320086    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:46.320185    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:46.331718    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:46.331796    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:46.346176    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:52:46.346263    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:46.357190    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:46.357273    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:46.367167    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:46.367247    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:46.377441    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:46.377529    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:46.387653    8964 logs.go:276] 0 containers: []
	W0920 10:52:46.387665    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:46.387741    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:46.398080    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:46.398098    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:52:46.398104    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:52:46.411104    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:46.411121    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:46.422318    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:46.422330    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:46.436727    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:46.436739    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:46.454552    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:46.454563    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:46.466105    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:46.466116    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:46.477353    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:46.477369    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:46.482245    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:46.482253    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:46.496066    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:52:46.496077    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:52:46.507698    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:46.507713    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:46.532225    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:46.532233    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:46.567083    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:46.567098    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:46.602603    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:46.602619    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:46.617530    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:46.617542    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:46.633037    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:46.633049    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:49.146269    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:54.145558    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:54.145761    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:54.165667    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:54.165771    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:54.179390    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:54.179468    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:54.191188    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:52:54.191275    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:54.202655    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:54.202739    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:54.213181    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:54.213262    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:54.223579    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:54.223663    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:54.233847    8964 logs.go:276] 0 containers: []
	W0920 10:52:54.233857    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:54.233920    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:54.244860    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:54.244877    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:54.244883    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:54.257369    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:54.257379    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:54.271914    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:54.271924    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:54.289549    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:54.289564    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:54.301421    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:54.301436    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:54.307663    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:54.307674    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:54.324213    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:54.324226    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:54.338576    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:52:54.338593    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:52:54.350319    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:52:54.350330    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:52:54.361991    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:54.362007    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:54.373079    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:54.373090    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:54.384871    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:54.384882    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:54.419305    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:54.419314    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:54.454611    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:54.454626    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:54.480136    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:54.480145    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:56.993455    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:01.995179    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:01.995430    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:02.013303    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:02.013411    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:02.025416    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:02.025504    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:02.037009    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:02.037093    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:02.047443    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:02.047522    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:02.058241    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:02.058316    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:02.069119    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:02.069206    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:02.085179    8964 logs.go:276] 0 containers: []
	W0920 10:53:02.085191    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:02.085265    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:02.095610    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:02.095627    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:02.095632    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:02.108755    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:02.108767    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:02.126893    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:02.126905    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:02.152154    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:02.152163    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:02.166379    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:02.166392    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:02.177585    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:02.177596    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:02.188884    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:02.188896    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:02.194141    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:02.194147    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:02.208726    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:02.208742    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:02.222374    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:02.222384    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:02.258371    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:02.258387    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:02.270786    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:02.270796    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:02.282552    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:02.282566    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:02.294450    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:02.294463    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:02.312953    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:02.312969    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:04.850046    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:09.851912    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:09.852157    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:09.872321    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:09.872450    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:09.887020    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:09.887110    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:09.899925    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:09.900012    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:09.910451    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:09.910529    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:09.921531    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:09.921608    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:09.932473    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:09.932544    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:09.942548    8964 logs.go:276] 0 containers: []
	W0920 10:53:09.942559    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:09.942627    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:09.953403    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:09.953420    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:09.953426    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:09.988696    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:09.988703    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:10.002726    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:10.002737    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:10.017970    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:10.017982    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:10.043341    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:10.043351    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:10.079078    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:10.079089    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:10.091138    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:10.091148    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:10.095804    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:10.095810    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:10.111785    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:10.111798    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:10.130407    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:10.130424    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:10.142617    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:10.142630    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:10.156941    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:10.156953    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:10.171341    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:10.171353    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:10.183870    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:10.183883    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:10.196367    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:10.196378    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:12.714073    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:17.716147    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:17.716312    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:17.731833    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:17.731932    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:17.745528    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:17.745626    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:17.756178    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:17.756257    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:17.772046    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:17.772131    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:17.787676    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:17.787760    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:17.798693    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:17.798770    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:17.808766    8964 logs.go:276] 0 containers: []
	W0920 10:53:17.808780    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:17.808869    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:17.819520    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:17.819544    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:17.819550    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:17.836536    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:17.836549    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:17.848522    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:17.848532    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:17.862915    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:17.862925    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:17.881801    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:17.881812    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:17.906481    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:17.906489    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:17.940013    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:17.940025    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:17.951843    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:17.951855    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:17.967051    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:17.967064    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:17.978717    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:17.978730    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:17.990973    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:17.990986    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:17.996186    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:17.996198    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:18.037757    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:18.037768    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:18.049098    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:18.049111    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:18.062580    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:18.062593    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:20.576382    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:25.578670    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:25.578826    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:25.592097    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:25.592201    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:25.603005    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:25.603084    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:25.614411    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:25.614495    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:25.625315    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:25.625401    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:25.636176    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:25.636253    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:25.646766    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:25.646842    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:25.656563    8964 logs.go:276] 0 containers: []
	W0920 10:53:25.656576    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:25.656645    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:25.666968    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:25.666984    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:25.666990    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:25.708289    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:25.708305    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:25.723056    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:25.723066    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:25.744752    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:25.744763    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:25.762396    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:25.762406    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:25.788018    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:25.788028    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:25.792523    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:25.792533    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:25.803801    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:25.803814    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:25.817703    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:25.817718    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:25.828969    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:25.828980    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:25.841529    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:25.841543    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:25.852689    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:25.852701    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:25.887537    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:25.887546    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:25.899468    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:25.899484    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:25.914226    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:25.914241    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:28.426231    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:33.428406    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:33.428540    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:33.440395    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:33.440479    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:33.452309    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:33.452395    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:33.464264    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:33.464350    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:33.476431    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:33.476515    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:33.488169    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:33.488251    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:33.499725    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:33.499808    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:33.510091    8964 logs.go:276] 0 containers: []
	W0920 10:53:33.510105    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:33.510181    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:33.520651    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:33.520668    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:33.520674    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:33.536814    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:33.536825    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:33.549305    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:33.549319    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:33.585943    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:33.585964    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:33.598811    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:33.598826    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:33.612127    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:33.612141    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:33.624741    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:33.624753    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:33.629418    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:33.629430    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:33.646287    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:33.646299    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:33.661442    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:33.661453    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:33.673119    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:33.673133    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:33.709601    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:33.709615    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:33.727116    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:33.727129    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:33.740838    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:33.740854    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:33.759803    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:33.759820    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:36.286804    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:41.288957    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:41.289107    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:41.302223    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:41.302318    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:41.313233    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:41.313325    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:41.324516    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:41.324611    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:41.335614    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:41.335689    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:41.346371    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:41.346439    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:41.358283    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:41.358367    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:41.368655    8964 logs.go:276] 0 containers: []
	W0920 10:53:41.368665    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:41.368736    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:41.378877    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:41.378892    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:41.378898    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:41.392623    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:41.392632    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:41.406230    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:41.406245    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:41.418697    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:41.418710    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:41.430765    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:41.430774    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:41.450084    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:41.450099    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:41.462496    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:41.462512    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:41.497962    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:41.497973    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:41.503138    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:41.503147    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:41.526417    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:41.526427    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:41.563902    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:41.563913    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:41.575610    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:41.575622    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:41.587165    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:41.587174    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:41.601931    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:41.601941    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:41.616935    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:41.616953    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:44.141613    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:49.143810    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:49.143956    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:49.154641    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:49.154738    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:49.165154    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:49.165236    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:49.175429    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:49.175518    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:49.186082    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:49.186166    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:49.196534    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:49.196614    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:49.207124    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:49.207211    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:49.217939    8964 logs.go:276] 0 containers: []
	W0920 10:53:49.217953    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:49.218027    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:49.228477    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:49.228496    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:49.228502    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:49.247592    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:49.247603    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:49.259464    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:49.259476    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:49.271301    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:49.271312    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:49.286432    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:49.286443    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:49.291156    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:49.291163    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:49.302828    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:49.302840    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:49.314452    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:49.314463    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:49.326145    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:49.326156    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:49.361082    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:49.361090    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:49.396546    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:49.396559    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:49.414480    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:49.414492    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:49.426297    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:49.426308    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:49.438914    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:49.438927    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:49.456459    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:49.456470    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:51.982410    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:56.984655    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:56.984835    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:56.997157    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:56.997251    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:57.007788    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:57.007873    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:57.017999    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:57.018082    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:57.028330    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:57.028404    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:57.038862    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:57.038942    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:57.049931    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:57.050009    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:57.060039    8964 logs.go:276] 0 containers: []
	W0920 10:53:57.060053    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:57.060125    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:57.071467    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:57.071485    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:57.071491    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:57.083162    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:57.083173    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:57.097705    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:57.097721    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:57.112299    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:57.112312    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:57.123547    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:57.123560    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:57.135111    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:57.135122    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:57.169313    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:57.169324    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:57.183381    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:57.183397    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:57.200972    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:57.200987    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:57.213389    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:57.213401    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:57.247013    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:57.247023    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:57.251318    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:57.251326    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:57.263148    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:57.263161    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:57.275058    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:57.275070    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:57.286183    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:57.286194    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:59.811486    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:04.813090    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:04.818520    8964 out.go:201] 
	W0920 10:54:04.821335    8964 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0920 10:54:04.821341    8964 out.go:270] * 
	W0920 10:54:04.821900    8964 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:54:04.836396    8964 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-097000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-20 10:54:04.924864 -0700 PDT m=+1320.794018751
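The failing sequence reduces to the two starts recorded in the Audit table below; a minimal local reproduction sketch follows (the v1.26.0 binary path is a placeholder for the release download, not a CI artifact):

	# Start the cluster with the old release, then upgrade in place with the HEAD build.
	./minikube-v1.26.0 start -p running-upgrade-097000 --memory=2200 --vm-driver=qemu2
	out/minikube-darwin-arm64 start -p running-upgrade-097000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2
	# Collect the full log bundle for a GitHub issue, as the error box above suggests.
	out/minikube-darwin-arm64 -p running-upgrade-097000 logs --file=logs.txt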
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-097000 -n running-upgrade-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-097000 -n running-upgrade-097000: exit status 2 (15.578805041s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
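The stderr above and the post-mortem logs below show the same probe loop: minikube issues an HTTPS GET against https://10.0.2.15:8443/healthz with a short per-request timeout until the 6m0s node-start budget is exhausted. A manual equivalent from inside the guest, as a sketch only (-k skips the cluster-CA verification that minikube itself performs):

	curl -k --max-time 5 https://10.0.2.15:8443/healthz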
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-097000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-999000          | force-systemd-flag-999000 | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-463000              | force-systemd-env-463000  | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-463000           | force-systemd-env-463000  | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT | 20 Sep 24 10:44 PDT |
	| start   | -p docker-flags-211000                | docker-flags-211000       | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-999000             | force-systemd-flag-999000 | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-999000          | force-systemd-flag-999000 | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT | 20 Sep 24 10:44 PDT |
	| start   | -p cert-expiration-196000             | cert-expiration-196000    | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-211000 ssh               | docker-flags-211000       | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-211000 ssh               | docker-flags-211000       | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-211000                | docker-flags-211000       | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT | 20 Sep 24 10:44 PDT |
	| start   | -p cert-options-683000                | cert-options-683000       | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-683000 ssh               | cert-options-683000       | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-683000 -- sudo        | cert-options-683000       | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-683000                | cert-options-683000       | jenkins | v1.34.0 | 20 Sep 24 10:44 PDT | 20 Sep 24 10:44 PDT |
	| start   | -p running-upgrade-097000             | minikube                  | jenkins | v1.26.0 | 20 Sep 24 10:44 PDT | 20 Sep 24 10:45 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-097000             | running-upgrade-097000    | jenkins | v1.34.0 | 20 Sep 24 10:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-196000             | cert-expiration-196000    | jenkins | v1.34.0 | 20 Sep 24 10:47 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-196000             | cert-expiration-196000    | jenkins | v1.34.0 | 20 Sep 24 10:47 PDT | 20 Sep 24 10:47 PDT |
	| start   | -p kubernetes-upgrade-279000          | kubernetes-upgrade-279000 | jenkins | v1.34.0 | 20 Sep 24 10:47 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-279000          | kubernetes-upgrade-279000 | jenkins | v1.34.0 | 20 Sep 24 10:47 PDT | 20 Sep 24 10:48 PDT |
	| start   | -p kubernetes-upgrade-279000          | kubernetes-upgrade-279000 | jenkins | v1.34.0 | 20 Sep 24 10:48 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-279000          | kubernetes-upgrade-279000 | jenkins | v1.34.0 | 20 Sep 24 10:48 PDT | 20 Sep 24 10:48 PDT |
	| start   | -p stopped-upgrade-770000             | minikube                  | jenkins | v1.26.0 | 20 Sep 24 10:48 PDT | 20 Sep 24 10:48 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-770000 stop           | minikube                  | jenkins | v1.26.0 | 20 Sep 24 10:48 PDT | 20 Sep 24 10:49 PDT |
	| start   | -p stopped-upgrade-770000             | stopped-upgrade-770000    | jenkins | v1.34.0 | 20 Sep 24 10:49 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
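	Rows with an empty End Time column never completed: every v1.34.0 start on the qemu2 driver in this window timed out, while both v1.26.0 starts finished. The same command history is kept on disk and can be read directly; the path below assumes the default layout under MINIKUBE_HOME:
	
	    cat ~/.minikube/logs/audit.json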
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 10:49:00
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
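	Note the threadid column in the format above: this section interleaves two processes. 9094 is the stopped-upgrade-770000 start whose log file this is, and 8964 is the still-running running-upgrade-097000 start whose failure is reported earlier. To follow a single process, filter on that column, e.g.:
	
	    grep ' 9094 ' logs.txt    # logs.txt is a placeholder for the saved bundle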
	I0920 10:49:00.737539    9094 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:49:00.737726    9094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:49:00.737730    9094 out.go:358] Setting ErrFile to fd 2...
	I0920 10:49:00.737733    9094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:49:00.737906    9094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:49:00.739100    9094 out.go:352] Setting JSON to false
	I0920 10:49:00.758234    9094 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6511,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:49:00.758308    9094 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:49:00.762691    9094 out.go:177] * [stopped-upgrade-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:49:00.769632    9094 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:49:00.769663    9094 notify.go:220] Checking for updates...
	I0920 10:49:00.776692    9094 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:49:00.780655    9094 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:49:00.783679    9094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:49:00.786698    9094 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:49:00.789616    9094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:49:00.793024    9094 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:49:00.796672    9094 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 10:49:00.799654    9094 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:49:00.803616    9094 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:49:00.810609    9094 start.go:297] selected driver: qemu2
	I0920 10:49:00.810615    9094 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51545 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:49:00.810663    9094 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:49:00.813438    9094 cni.go:84] Creating CNI manager for ""
	I0920 10:49:00.813470    9094 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:49:00.813507    9094 start.go:340] cluster config:
	{Name:stopped-upgrade-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51545 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:49:00.813558    9094 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:49:00.819020    9094 out.go:177] * Starting "stopped-upgrade-770000" primary control-plane node in "stopped-upgrade-770000" cluster
	I0920 10:49:00.822618    9094 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:49:00.822633    9094 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0920 10:49:00.822641    9094 cache.go:56] Caching tarball of preloaded images
	I0920 10:49:00.822693    9094 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:49:00.822699    9094 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0920 10:49:00.822746    9094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/config.json ...
	I0920 10:49:00.823074    9094 start.go:360] acquireMachinesLock for stopped-upgrade-770000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:49:00.823101    9094 start.go:364] duration metric: took 20.25µs to acquireMachinesLock for "stopped-upgrade-770000"
	I0920 10:49:00.823110    9094 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:49:00.823115    9094 fix.go:54] fixHost starting: 
	I0920 10:49:00.823226    9094 fix.go:112] recreateIfNeeded on stopped-upgrade-770000: state=Stopped err=<nil>
	W0920 10:49:00.823234    9094 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:49:00.831612    9094 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-770000" ...
	I0920 10:49:00.875761    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:00.875886    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:00.887672    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:00.887752    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:00.898818    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:00.898894    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:00.910354    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:00.910428    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:00.921860    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:00.921929    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:00.933064    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:00.933141    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:00.943730    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:00.943807    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:00.953994    8964 logs.go:276] 0 containers: []
	W0920 10:49:00.954007    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:00.954072    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:00.968671    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:00.968692    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:00.968697    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:00.989214    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:00.989232    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:01.002537    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:01.002548    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:01.039476    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:01.039488    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:01.053039    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:01.053050    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:01.077731    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:01.077756    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:01.083192    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:01.083203    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:01.098032    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:01.098043    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:01.113635    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:01.113646    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:01.126245    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:01.126258    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:01.140529    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:01.140547    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:01.155795    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:01.155809    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:01.193569    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:01.193577    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:01.219064    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:01.219074    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:01.232554    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:01.232565    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:01.243804    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:01.243814    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:01.254958    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:01.254969    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:03.768734    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:00.835666    9094 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:49:00.835774    9094 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51511-:22,hostfwd=tcp::51512-:2376,hostname=stopped-upgrade-770000 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/disk.qcow2
	I0920 10:49:00.882989    9094 main.go:141] libmachine: STDOUT: 
	I0920 10:49:00.883009    9094 main.go:141] libmachine: STDERR: 
	I0920 10:49:00.883016    9094 main.go:141] libmachine: Waiting for VM to start (ssh -p 51511 docker@127.0.0.1)...
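	The hostfwd rules in the qemu-system-aarch64 invocation above are what make the guest reachable from the host: guest port 22 (ssh) maps to host port 51511 and the Docker TLS port 2376 to 51512. Quick manual checks against a running VM, using the per-run port values taken from this log:
	
	    ssh -p 51511 docker@127.0.0.1
	    nc -z 127.0.0.1 51512 && echo "dockerd port is forwarded"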
	I0920 10:49:08.771420    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:08.771532    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:08.787978    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:08.788051    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:08.800664    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:08.800749    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:08.811031    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:08.811102    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:08.821953    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:08.822036    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:08.840953    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:08.841037    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:08.856536    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:08.856622    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:08.866424    8964 logs.go:276] 0 containers: []
	W0920 10:49:08.866437    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:08.866498    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:08.877599    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:08.877618    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:08.877623    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:08.891056    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:08.891065    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:08.916542    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:08.916555    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:08.927879    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:08.927890    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:08.940168    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:08.940178    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:08.956989    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:08.957004    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:08.972520    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:08.972531    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:09.008190    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:09.008201    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:09.022451    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:09.022460    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:09.039582    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:09.039593    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:09.056373    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:09.056382    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:09.068971    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:09.068982    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:09.084961    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:09.084970    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:09.125291    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:09.125299    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:09.129577    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:09.129587    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:09.144129    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:09.144138    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:09.158445    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:09.158455    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:11.684976    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:16.687286    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:16.687751    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:16.722226    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:16.722386    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:16.742015    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:16.742148    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:16.756337    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:16.756424    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:16.768671    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:16.768752    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:16.779531    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:16.779603    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:16.790228    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:16.790311    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:16.800543    8964 logs.go:276] 0 containers: []
	W0920 10:49:16.800560    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:16.800623    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:16.811037    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:16.811055    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:16.811060    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:16.822525    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:16.822539    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:16.834136    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:16.834149    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:16.847875    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:16.847884    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:16.860532    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:16.860542    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:16.875571    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:16.875586    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:16.900283    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:16.900298    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:16.917204    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:16.917217    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:16.944371    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:16.944392    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:16.961451    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:16.961469    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:16.978158    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:16.978175    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:16.995688    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:16.995699    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:17.019449    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:17.019456    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:17.030722    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:17.030732    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:17.070241    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:17.070248    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:17.104164    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:17.104174    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:17.109116    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:17.109123    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:19.632178    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:20.572309    9094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/config.json ...
	I0920 10:49:20.573100    9094 machine.go:93] provisionDockerMachine start ...
	I0920 10:49:20.573308    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:20.573851    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:20.573869    9094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 10:49:20.647290    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 10:49:20.647323    9094 buildroot.go:166] provisioning hostname "stopped-upgrade-770000"
	I0920 10:49:20.647450    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:20.647709    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:20.647726    9094 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-770000 && echo "stopped-upgrade-770000" | sudo tee /etc/hostname
	I0920 10:49:20.716733    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-770000
	
	I0920 10:49:20.716793    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:20.716926    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:20.716936    9094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-770000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-770000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-770000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 10:49:20.778907    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:49:20.778918    9094 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19679-6783/.minikube CaCertPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19679-6783/.minikube}
	I0920 10:49:20.778925    9094 buildroot.go:174] setting up certificates
	I0920 10:49:20.778930    9094 provision.go:84] configureAuth start
	I0920 10:49:20.778934    9094 provision.go:143] copyHostCerts
	I0920 10:49:20.778998    9094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19679-6783/.minikube/cert.pem, removing ...
	I0920 10:49:20.779005    9094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19679-6783/.minikube/cert.pem
	I0920 10:49:20.779249    9094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19679-6783/.minikube/cert.pem (1123 bytes)
	I0920 10:49:20.779426    9094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19679-6783/.minikube/key.pem, removing ...
	I0920 10:49:20.779431    9094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19679-6783/.minikube/key.pem
	I0920 10:49:20.779479    9094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19679-6783/.minikube/key.pem (1675 bytes)
	I0920 10:49:20.779589    9094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.pem, removing ...
	I0920 10:49:20.779593    9094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.pem
	I0920 10:49:20.779638    9094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.pem (1078 bytes)
	I0920 10:49:20.779729    9094 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-770000 san=[127.0.0.1 localhost minikube stopped-upgrade-770000]
	I0920 10:49:20.823212    9094 provision.go:177] copyRemoteCerts
	I0920 10:49:20.823247    9094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 10:49:20.823254    9094 sshutil.go:53] new ssh client: &{IP:localhost Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/id_rsa Username:docker}
	I0920 10:49:20.853398    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 10:49:20.860281    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 10:49:20.867441    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 10:49:20.874383    9094 provision.go:87] duration metric: took 95.44475ms to configureAuth
	I0920 10:49:20.874392    9094 buildroot.go:189] setting minikube options for container-runtime
	I0920 10:49:20.874500    9094 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:49:20.874551    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:20.874637    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:20.874641    9094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 10:49:20.931824    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 10:49:20.931841    9094 buildroot.go:70] root file system type: tmpfs
	I0920 10:49:20.931891    9094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 10:49:20.931953    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:20.932070    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:20.932106    9094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 10:49:20.996315    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 10:49:20.996378    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:20.996508    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:20.996519    9094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 10:49:21.355125    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0920 10:49:21.355140    9094 machine.go:96] duration metric: took 782.0315ms to provisionDockerMachine
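	(The docker.service rewrite above is an update-if-changed idiom: diff -u exits non-zero when the installed unit is missing or differs from the freshly rendered one, so the || branch installs the new file and reloads, enables, and restarts docker. Here diff fails with "No such file or directory" on first provision, the move happens, and enable creates the multi-user.target.wants symlink. The generic pattern, with $unit standing in for /lib/systemd/system/docker.service:
	  sudo diff -u "$unit" "$unit.new" || {
	      sudo mv "$unit.new" "$unit"
	      sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	  } )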
	I0920 10:49:21.355148    9094 start.go:293] postStartSetup for "stopped-upgrade-770000" (driver="qemu2")
	I0920 10:49:21.355155    9094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 10:49:21.355225    9094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 10:49:21.355235    9094 sshutil.go:53] new ssh client: &{IP:localhost Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/id_rsa Username:docker}
	I0920 10:49:21.386300    9094 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 10:49:21.387601    9094 info.go:137] Remote host: Buildroot 2021.02.12
	I0920 10:49:21.387608    9094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19679-6783/.minikube/addons for local assets ...
	I0920 10:49:21.387688    9094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19679-6783/.minikube/files for local assets ...
	I0920 10:49:21.387785    9094 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/ssl/certs/72792.pem -> 72792.pem in /etc/ssl/certs
	I0920 10:49:21.387885    9094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 10:49:21.390777    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/ssl/certs/72792.pem --> /etc/ssl/certs/72792.pem (1708 bytes)
	I0920 10:49:21.397872    9094 start.go:296] duration metric: took 42.71925ms for postStartSetup
	I0920 10:49:21.397886    9094 fix.go:56] duration metric: took 20.57484875s for fixHost
	I0920 10:49:21.397932    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:21.398038    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:21.398043    9094 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 10:49:21.453478    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726854561.749706837
	
	I0920 10:49:21.453486    9094 fix.go:216] guest clock: 1726854561.749706837
	I0920 10:49:21.453489    9094 fix.go:229] Guest: 2024-09-20 10:49:21.749706837 -0700 PDT Remote: 2024-09-20 10:49:21.397888 -0700 PDT m=+20.692017418 (delta=351.818837ms)
	I0920 10:49:21.453500    9094 fix.go:200] guest clock delta is within tolerance: 351.818837ms
	I0920 10:49:21.453503    9094 start.go:83] releasing machines lock for "stopped-upgrade-770000", held for 20.630474458s
	I0920 10:49:21.453571    9094 ssh_runner.go:195] Run: cat /version.json
	I0920 10:49:21.453581    9094 sshutil.go:53] new ssh client: &{IP:localhost Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/id_rsa Username:docker}
	I0920 10:49:21.453572    9094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 10:49:21.453618    9094 sshutil.go:53] new ssh client: &{IP:localhost Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/id_rsa Username:docker}
	W0920 10:49:21.454151    9094 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51511: connect: connection refused
	I0920 10:49:21.454172    9094 retry.go:31] will retry after 339.055571ms: dial tcp [::1]:51511: connect: connection refused
	W0920 10:49:21.484290    9094 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0920 10:49:21.484337    9094 ssh_runner.go:195] Run: systemctl --version
	I0920 10:49:21.486191    9094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 10:49:21.487734    9094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 10:49:21.487766    9094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0920 10:49:21.491018    9094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0920 10:49:21.495963    9094 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
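	(The two find/sed passes above rewrite any IPv4 subnet and gateway in the bridge and podman CNI configs to the cluster pod CIDR and drop IPv6 entries. After the edit, 87-podman-bridge.conflist would contain roughly the following; the shape follows the standard bridge/host-local plugin schema and is illustrative only:
	  {
	    "cniVersion": "0.4.0",
	    "name": "podman",
	    "plugins": [
	      { "type": "bridge", "bridge": "cni-podman0", "isGateway": true,
	        "ipam": { "type": "host-local",
	                  "ranges": [[ { "subnet": "10.244.0.0/16", "gateway": "10.244.0.1" } ]] } }
	    ]
	  } )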
	I0920 10:49:21.495971    9094 start.go:495] detecting cgroup driver to use...
	I0920 10:49:21.496048    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:49:21.502606    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0920 10:49:21.505740    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 10:49:21.508544    9094 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 10:49:21.508571    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 10:49:21.511771    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:49:21.515293    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 10:49:21.518627    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:49:21.521470    9094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 10:49:21.524246    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 10:49:21.527615    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 10:49:21.531124    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 10:49:21.534557    9094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 10:49:21.537028    9094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 10:49:21.539892    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:49:21.588038    9094 ssh_runner.go:195] Run: sudo systemctl restart containerd
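	(Net effect of the config.toml edits above, shown as a fragment; key layout per the containerd 1.x CRI plugin, illustrative only:
	  [plugins."io.containerd.grpc.v1.cri"]
	    sandbox_image = "registry.k8s.io/pause:3.7"
	    restrict_oom_score_adj = false
	    enable_unprivileged_ports = true
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	      runtime_type = "io.containerd.runc.v2"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        SystemdCgroup = false
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      conf_dir = "/etc/cni/net.d" )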
	I0920 10:49:21.598680    9094 start.go:495] detecting cgroup driver to use...
	I0920 10:49:21.598757    9094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 10:49:21.604276    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:49:21.610887    9094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 10:49:21.619723    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:49:21.624665    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:49:21.629461    9094 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 10:49:21.679368    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:49:21.684771    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:49:21.690036    9094 ssh_runner.go:195] Run: which cri-dockerd
	I0920 10:49:21.691241    9094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 10:49:21.694187    9094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0920 10:49:21.699070    9094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 10:49:21.765389    9094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 10:49:21.842844    9094 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 10:49:21.842901    9094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 10:49:21.848155    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:49:21.928957    9094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:49:23.038802    9094 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.109832542s)
	I0920 10:49:23.038867    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 10:49:23.043571    9094 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0920 10:49:23.049619    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:49:23.053961    9094 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 10:49:23.124408    9094 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 10:49:23.196973    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:49:23.267225    9094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 10:49:23.273254    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:49:23.278015    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:49:23.355551    9094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 10:49:23.398536    9094 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 10:49:23.398630    9094 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 10:49:23.401475    9094 start.go:563] Will wait 60s for crictl version
	I0920 10:49:23.401542    9094 ssh_runner.go:195] Run: which crictl
	I0920 10:49:23.402864    9094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 10:49:23.417218    9094 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0920 10:49:23.417307    9094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:49:23.433451    9094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:49:24.634504    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:24.634622    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:24.646562    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:24.646642    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:24.657672    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:24.657763    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:24.668983    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:24.669065    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:24.681141    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:24.681230    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:24.692696    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:24.692775    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:24.705906    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:24.705997    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:24.717097    8964 logs.go:276] 0 containers: []
	W0920 10:49:24.717111    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:24.717183    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:24.728498    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:24.728519    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:24.728525    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:24.733588    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:24.733601    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:24.748195    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:24.748214    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:24.765141    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:24.765151    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:24.789835    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:24.789851    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:24.829759    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:24.829771    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:24.856558    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:24.856579    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:24.871717    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:24.871732    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:24.885386    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:24.885399    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:24.905436    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:24.905451    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:24.918882    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:24.918899    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:24.931178    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:24.931191    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:24.973781    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:24.973800    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:24.993681    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:24.993701    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:25.010877    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:25.010890    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:25.024421    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:25.024436    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:25.041672    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:25.041722    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:23.450081    9094 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0920 10:49:23.450168    9094 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0920 10:49:23.451452    9094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 10:49:23.455554    9094 kubeadm.go:883] updating cluster {Name:stopped-upgrade-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51545 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0920 10:49:23.455599    9094 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:49:23.455652    9094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:49:23.465735    9094 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:49:23.465744    9094 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:49:23.465805    9094 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:49:23.468711    9094 ssh_runner.go:195] Run: which lz4
	I0920 10:49:23.469980    9094 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 10:49:23.471078    9094 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 10:49:23.471088    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0920 10:49:24.415723    9094 docker.go:649] duration metric: took 945.790125ms to copy over tarball
	I0920 10:49:24.415787    9094 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 10:49:25.585253    9094 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.169457292s)
	I0920 10:49:25.585266    9094 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 10:49:25.600768    9094 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:49:25.603941    9094 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0920 10:49:25.609359    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:49:25.684770    9094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:49:27.561425    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:27.329191    9094 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.644411583s)
	I0920 10:49:27.329315    9094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:49:27.340134    9094 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:49:27.340145    9094 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:49:27.340150    9094 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 10:49:27.345301    9094 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:49:27.347604    9094 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:49:27.349767    9094 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 10:49:27.349804    9094 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:49:27.351975    9094 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:49:27.352119    9094 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:49:27.353611    9094 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 10:49:27.353628    9094 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:49:27.354585    9094 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:49:27.355397    9094 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:49:27.355980    9094 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:49:27.356766    9094 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:49:27.357244    9094 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:49:27.357591    9094 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:49:27.358361    9094 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:49:27.359431    9094 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:49:27.753942    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0920 10:49:27.764744    9094 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0920 10:49:27.764773    9094 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0920 10:49:27.764833    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0920 10:49:27.766667    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:49:27.771525    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:49:27.779849    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:49:27.781128    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0920 10:49:27.781237    9094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0920 10:49:27.790167    9094 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0920 10:49:27.790190    9094 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:49:27.790256    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:49:27.795975    9094 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0920 10:49:27.795995    9094 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:49:27.796054    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:49:27.798937    9094 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0920 10:49:27.798963    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0920 10:49:27.798969    9094 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0920 10:49:27.798988    9094 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:49:27.799036    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:49:27.799357    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:49:27.816169    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0920 10:49:27.822738    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0920 10:49:27.822783    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0920 10:49:27.827389    9094 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0920 10:49:27.827406    9094 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:49:27.827471    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:49:27.828453    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0920 10:49:27.829527    9094 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0920 10:49:27.829533    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0920 10:49:27.839983    9094 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0920 10:49:27.840133    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:49:27.840383    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0920 10:49:27.844493    9094 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0920 10:49:27.844512    9094 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:49:27.844578    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0920 10:49:27.871040    9094 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0920 10:49:27.871072    9094 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0920 10:49:27.871088    9094 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:49:27.871091    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0920 10:49:27.871142    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:49:27.871200    9094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:49:27.881228    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 10:49:27.881244    9094 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0920 10:49:27.881257    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0920 10:49:27.881357    9094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:49:27.894161    9094 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0920 10:49:27.894205    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0920 10:49:27.981496    9094 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:49:27.981512    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0920 10:49:28.076503    9094 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0920 10:49:28.177312    9094 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:49:28.177327    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0920 10:49:28.267656    9094 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0920 10:49:28.267799    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:49:28.331520    9094 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0920 10:49:28.331554    9094 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0920 10:49:28.331574    9094 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:49:28.331650    9094 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:49:28.345746    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 10:49:28.345888    9094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:49:28.347260    9094 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0920 10:49:28.347501    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0920 10:49:28.375147    9094 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:49:28.375165    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0920 10:49:28.609655    9094 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
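	(Every cached image above goes through the same three steps: a stat on the guest, where exit status 1 means the tarball is absent, a copy of the tarball from the host cache, then a stream into the daemon:
	  stat -c "%s %y" /var/lib/minikube/images/pause_3.7         # absent -> transfer it over
	  sudo cat /var/lib/minikube/images/pause_3.7 | docker load  # load into the runtime )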
	I0920 10:49:28.609693    9094 cache_images.go:92] duration metric: took 1.269540292s to LoadCachedImages
	W0920 10:49:28.609738    9094 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0920 10:49:28.609746    9094 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0920 10:49:28.609806    9094 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-770000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 10:49:28.609902    9094 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 10:49:28.623073    9094 cni.go:84] Creating CNI manager for ""
	I0920 10:49:28.623086    9094 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:49:28.623096    9094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 10:49:28.623104    9094 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-770000 NodeName:stopped-upgrade-770000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 10:49:28.623177    9094 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-770000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
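	(A rendered config of this shape can be sanity-checked against the pinned kubeadm without touching the node, e.g., illustrative and not part of the minikube flow:
	  sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run )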
	
	I0920 10:49:28.623241    9094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0920 10:49:28.626826    9094 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 10:49:28.626861    9094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 10:49:28.629451    9094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0920 10:49:28.634214    9094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 10:49:28.639160    9094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0920 10:49:28.644754    9094 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0920 10:49:28.646022    9094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 10:49:28.649449    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:49:28.727741    9094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:49:28.733454    9094 certs.go:68] Setting up /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000 for IP: 10.0.2.15
	I0920 10:49:28.733463    9094 certs.go:194] generating shared ca certs ...
	I0920 10:49:28.733473    9094 certs.go:226] acquiring lock for ca certs: {Name:mk223deb0e7531c2ef743391b3102022988e9e71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:49:28.733654    9094 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.key
	I0920 10:49:28.733708    9094 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/proxy-client-ca.key
	I0920 10:49:28.733713    9094 certs.go:256] generating profile certs ...
	I0920 10:49:28.733789    9094 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/client.key
	I0920 10:49:28.733806    9094 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.key.95f8aeec
	I0920 10:49:28.733815    9094 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.crt.95f8aeec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0920 10:49:28.907055    9094 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.crt.95f8aeec ...
	I0920 10:49:28.907072    9094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.crt.95f8aeec: {Name:mkd934f6f29ee3f1a97421450aecdc94ca438ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:49:28.908540    9094 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.key.95f8aeec ...
	I0920 10:49:28.908547    9094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.key.95f8aeec: {Name:mk82aa04d4220c51f383542e5fbc9e62cb636def Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:49:28.908710    9094 certs.go:381] copying /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.crt.95f8aeec -> /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.crt
	I0920 10:49:28.908854    9094 certs.go:385] copying /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.key.95f8aeec -> /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.key
	I0920 10:49:28.909014    9094 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/proxy-client.key
	I0920 10:49:28.909153    9094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/7279.pem (1338 bytes)
	W0920 10:49:28.909183    9094 certs.go:480] ignoring /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/7279_empty.pem, impossibly tiny 0 bytes
	I0920 10:49:28.909190    9094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 10:49:28.909216    9094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem (1078 bytes)
	I0920 10:49:28.909245    9094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem (1123 bytes)
	I0920 10:49:28.909268    9094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/key.pem (1675 bytes)
	I0920 10:49:28.909320    9094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/ssl/certs/72792.pem (1708 bytes)
	I0920 10:49:28.909650    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 10:49:28.916485    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 10:49:28.923671    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 10:49:28.931231    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 10:49:28.937754    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 10:49:28.944422    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 10:49:28.951616    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 10:49:28.959070    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 10:49:28.966221    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/ssl/certs/72792.pem --> /usr/share/ca-certificates/72792.pem (1708 bytes)
	I0920 10:49:28.972930    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 10:49:28.979900    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/7279.pem --> /usr/share/ca-certificates/7279.pem (1338 bytes)
	I0920 10:49:28.987252    9094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 10:49:28.992735    9094 ssh_runner.go:195] Run: openssl version
	I0920 10:49:28.994644    9094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72792.pem && ln -fs /usr/share/ca-certificates/72792.pem /etc/ssl/certs/72792.pem"
	I0920 10:49:28.997542    9094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72792.pem
	I0920 10:49:28.998991    9094 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:32 /usr/share/ca-certificates/72792.pem
	I0920 10:49:28.999022    9094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72792.pem
	I0920 10:49:29.000961    9094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72792.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 10:49:29.004135    9094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 10:49:29.007802    9094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:49:29.009451    9094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:49:29.009476    9094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:49:29.011302    9094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 10:49:29.014488    9094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7279.pem && ln -fs /usr/share/ca-certificates/7279.pem /etc/ssl/certs/7279.pem"
	I0920 10:49:29.017722    9094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7279.pem
	I0920 10:49:29.019500    9094 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:32 /usr/share/ca-certificates/7279.pem
	I0920 10:49:29.019558    9094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7279.pem
	I0920 10:49:29.021721    9094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7279.pem /etc/ssl/certs/51391683.0"
	I0920 10:49:29.025011    9094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 10:49:29.026880    9094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 10:49:29.029519    9094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 10:49:29.031887    9094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 10:49:29.034652    9094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 10:49:29.037118    9094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 10:49:29.039605    9094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
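	(Two openssl idioms recur in this block: -hash prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks created above, b5213941 for the minikube CA, and -checkend 86400 exits 0 only if the cert is still valid 24 hours from now:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "valid >24h" )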
	I0920 10:49:29.042689    9094 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51545 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:49:29.042800    9094 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:49:29.054063    9094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 10:49:29.057189    9094 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 10:49:29.057194    9094 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 10:49:29.057221    9094 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 10:49:29.059991    9094 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:49:29.060288    9094 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-770000" does not appear in /Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:49:29.060384    9094 kubeconfig.go:62] /Users/jenkins/minikube-integration/19679-6783/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-770000" cluster setting kubeconfig missing "stopped-upgrade-770000" context setting]
	I0920 10:49:29.060560    9094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/kubeconfig: {Name:mkc202c0538e947b3e0d9844748996d0c112bf36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:49:29.061052    9094 kapi.go:59] client config for stopped-upgrade-770000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/client.key", CAFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1026aa030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:49:29.061417    9094 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 10:49:29.064007    9094 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-770000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
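
The drift check renders the desired config to kubeadm.yaml.new, runs the "sudo diff -u" above against the deployed copy, and treats any difference as drift; here the CRI socket gained the unix:// scheme and the kubelet cgroup driver changed from systemd to cgroupfs. The decision reduces to a shell one-liner (a sketch, not minikube's actual Go code):

    # non-empty unified diff => config drift => adopt the new file and reconfigure
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi
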
	I0920 10:49:29.064015    9094 kubeadm.go:1160] stopping kube-system containers ...
	I0920 10:49:29.064070    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:49:29.075085    9094 docker.go:483] Stopping containers: [0f1f6ae5b381 e7cf43c8a211 cd1e5a8150d3 07e2780d69fa 4d8808795719 0efea235af05 d9ea4bef2395 d9687b348b64]
	I0920 10:49:29.075169    9094 ssh_runner.go:195] Run: docker stop 0f1f6ae5b381 e7cf43c8a211 cd1e5a8150d3 07e2780d69fa 4d8808795719 0efea235af05 d9ea4bef2395 d9687b348b64
	I0920 10:49:29.086241    9094 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 10:49:29.091416    9094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:49:29.094541    9094 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:49:29.094554    9094 kubeadm.go:157] found existing configuration files:
	
	I0920 10:49:29.094579    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/admin.conf
	I0920 10:49:29.097112    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:49:29.097138    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:49:29.099951    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/kubelet.conf
	I0920 10:49:29.102843    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:49:29.102868    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:49:29.105321    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/controller-manager.conf
	I0920 10:49:29.107888    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:49:29.107916    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:49:29.111105    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/scheduler.conf
	I0920 10:49:29.113811    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:49:29.113839    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
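
The cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that lacks it (here the files are simply absent, so grep exits 2 and the rm is a no-op). Equivalent shell for the four files, with the endpoint and port taken from this profile's log:

    endpoint=https://control-plane.minikube.internal:51545   # this profile's apiserver endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
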
	I0920 10:49:29.116476    9094 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:49:29.119565    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:49:29.143949    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:49:29.503407    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:49:29.625674    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:49:29.651956    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
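
On the restart path minikube re-runs individual "kubeadm init" phases against the saved config rather than a full init; a full init is the fallback if this fails. The five phases above, condensed (a sketch; the real sequencing is in kubeadm.go):

    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
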
	I0920 10:49:29.670162    9094 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:49:29.670251    9094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:49:30.172477    9094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:49:30.672328    9094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:49:30.676386    9094 api_server.go:72] duration metric: took 1.006229083s to wait for apiserver process to appear ...
	I0920 10:49:30.676395    9094 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:49:30.676406    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
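
The healthz wait first polls for a kube-apiserver process (the pgrep loop above), then repeatedly GETs /healthz with a short per-request timeout; every "stopped: ... context deadline exceeded" line below is one failed probe. A curl stand-in for the Go HTTP client minikube actually uses (the timeout value here is illustrative):

    # keep probing until the apiserver answers /healthz
    until curl -ksf --max-time 4 https://10.0.2.15:8443/healthz >/dev/null; do
      sleep 1
    done
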
	I0920 10:49:32.563580    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:32.563704    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:32.575508    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:32.575598    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:32.591322    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:32.591404    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:32.603614    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:32.603695    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:32.614675    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:32.614756    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:32.625388    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:32.625481    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:32.636387    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:32.636471    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:32.647240    8964 logs.go:276] 0 containers: []
	W0920 10:49:32.647252    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:32.647325    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:32.658742    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:32.658760    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:32.658766    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:32.674750    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:32.674763    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:32.692912    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:32.692929    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:32.717705    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:32.717724    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:32.760948    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:32.760970    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:32.778449    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:32.778466    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:32.795957    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:32.795971    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:32.808431    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:32.808443    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:32.822303    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:32.822314    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:32.834197    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:32.834211    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:32.838492    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:32.838500    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:32.877086    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:32.877098    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:32.891398    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:32.891410    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:32.906532    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:32.906546    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:32.919875    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:32.919888    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:32.932188    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:32.932200    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:32.960688    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:32.960707    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
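
Between probe rounds minikube gathers diagnostics per component: cri-dockerd names containers "k8s_<component>_...", so a docker name filter yields the IDs, each of which is tailed with "docker logs --tail 400"; kubelet and docker output comes from journald. One component's round, written out as shell:

    for id in $(docker ps -a --filter=name=k8s_etcd --format '{{.ID}}'); do
      docker logs --tail 400 "$id"
    done
    sudo journalctl -u kubelet -n 400
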
	I0920 10:49:35.474921    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:35.677987    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:35.678037    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:40.477590    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:40.478140    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:40.520367    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:40.520532    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:40.541090    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:40.541230    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:40.562816    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:40.562914    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:40.575009    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:40.575097    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:40.585526    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:40.585614    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:40.597260    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:40.597345    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:40.608422    8964 logs.go:276] 0 containers: []
	W0920 10:49:40.608435    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:40.608512    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:40.619899    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:40.619917    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:40.619923    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:40.638103    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:40.638114    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:40.652127    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:40.652137    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:40.668366    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:40.668375    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:40.678632    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:40.678652    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:40.703795    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:40.703806    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:40.720218    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:40.720228    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:40.738559    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:40.738568    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:40.750523    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:40.750540    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:40.763997    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:40.764013    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:40.805167    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:40.805187    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:40.809684    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:40.809691    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:40.823877    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:40.823886    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:40.835518    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:40.835528    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:40.853151    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:40.853161    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:40.878908    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:40.878922    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:40.891519    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:40.891530    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:40.902901    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:40.902917    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:43.427674    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:45.678915    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:45.678976    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:48.428217    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:48.428363    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:49:48.440606    8964 logs.go:276] 2 containers: [6a6e92b2a3ea c6218011a3d3]
	I0920 10:49:48.440698    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:49:48.456491    8964 logs.go:276] 2 containers: [9e42cd4d5c4d 21241ebf186a]
	I0920 10:49:48.456581    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:49:48.467735    8964 logs.go:276] 1 containers: [a3349c436ced]
	I0920 10:49:48.467816    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:49:48.478420    8964 logs.go:276] 2 containers: [800bb19929f1 dca9f7a0c338]
	I0920 10:49:48.478509    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:49:48.489078    8964 logs.go:276] 1 containers: [390b0f1242bb]
	I0920 10:49:48.489159    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:49:48.500751    8964 logs.go:276] 2 containers: [4c9e14afabaa d901a9e09564]
	I0920 10:49:48.500831    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:49:48.511852    8964 logs.go:276] 0 containers: []
	W0920 10:49:48.511863    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:49:48.511936    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:49:48.527339    8964 logs.go:276] 2 containers: [11b3ea4e38dc f3d76f43ab9d]
	I0920 10:49:48.527361    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:49:48.527367    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:49:48.531686    8964 logs.go:123] Gathering logs for kube-apiserver [6a6e92b2a3ea] ...
	I0920 10:49:48.531691    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a6e92b2a3ea"
	I0920 10:49:48.545475    8964 logs.go:123] Gathering logs for kube-apiserver [c6218011a3d3] ...
	I0920 10:49:48.545486    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6218011a3d3"
	I0920 10:49:48.570691    8964 logs.go:123] Gathering logs for etcd [9e42cd4d5c4d] ...
	I0920 10:49:48.570706    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e42cd4d5c4d"
	I0920 10:49:48.584406    8964 logs.go:123] Gathering logs for etcd [21241ebf186a] ...
	I0920 10:49:48.584417    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21241ebf186a"
	I0920 10:49:48.598784    8964 logs.go:123] Gathering logs for coredns [a3349c436ced] ...
	I0920 10:49:48.598795    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3349c436ced"
	I0920 10:49:48.619410    8964 logs.go:123] Gathering logs for kube-scheduler [800bb19929f1] ...
	I0920 10:49:48.619421    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800bb19929f1"
	I0920 10:49:48.630710    8964 logs.go:123] Gathering logs for kube-scheduler [dca9f7a0c338] ...
	I0920 10:49:48.630721    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca9f7a0c338"
	I0920 10:49:48.646306    8964 logs.go:123] Gathering logs for kube-proxy [390b0f1242bb] ...
	I0920 10:49:48.646318    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390b0f1242bb"
	I0920 10:49:48.658055    8964 logs.go:123] Gathering logs for storage-provisioner [f3d76f43ab9d] ...
	I0920 10:49:48.658066    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3d76f43ab9d"
	I0920 10:49:48.670014    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:49:48.670026    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:49:48.694032    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:49:48.694042    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:49:48.731544    8964 logs.go:123] Gathering logs for storage-provisioner [11b3ea4e38dc] ...
	I0920 10:49:48.731555    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b3ea4e38dc"
	I0920 10:49:48.743451    8964 logs.go:123] Gathering logs for kube-controller-manager [4c9e14afabaa] ...
	I0920 10:49:48.743462    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c9e14afabaa"
	I0920 10:49:48.761530    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:49:48.761546    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:49:48.802814    8964 logs.go:123] Gathering logs for kube-controller-manager [d901a9e09564] ...
	I0920 10:49:48.802826    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d901a9e09564"
	I0920 10:49:48.815063    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:49:48.815077    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:49:50.679688    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:50.679799    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:51.329774    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:55.681047    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:55.681154    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:56.330585    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:56.330644    8964 kubeadm.go:597] duration metric: took 4m4.328905667s to restartPrimaryControlPlane
	W0920 10:49:56.330694    8964 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 10:49:56.330709    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
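
restartPrimaryControlPlane gave up after 4m4s, so minikube falls back to wiping the control plane and re-initializing from scratch: the "kubeadm reset --force" above, followed by the full "kubeadm init" visible at 10:49:57 below. The fallback pair, abbreviated (the complete --ignore-preflight-errors list is in the Start line below):

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem
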
	I0920 10:49:57.300607    8964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:49:57.305638    8964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:49:57.308787    8964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:49:57.311521    8964 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:49:57.311527    8964 kubeadm.go:157] found existing configuration files:
	
	I0920 10:49:57.311557    8964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/admin.conf
	I0920 10:49:57.314128    8964 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:49:57.314152    8964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:49:57.317381    8964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/kubelet.conf
	I0920 10:49:57.320359    8964 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:49:57.320386    8964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:49:57.322855    8964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/controller-manager.conf
	I0920 10:49:57.325643    8964 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:49:57.325671    8964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:49:57.328556    8964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/scheduler.conf
	I0920 10:49:57.331123    8964 kubeadm.go:163] "https://control-plane.minikube.internal:51293" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51293 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:49:57.331148    8964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 10:49:57.333819    8964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 10:49:57.351442    8964 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0920 10:49:57.351549    8964 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 10:49:57.398406    8964 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 10:49:57.398556    8964 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 10:49:57.398723    8964 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 10:49:57.447772    8964 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 10:49:57.451882    8964 out.go:235]   - Generating certificates and keys ...
	I0920 10:49:57.451917    8964 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 10:49:57.451951    8964 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 10:49:57.452068    8964 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 10:49:57.452280    8964 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 10:49:57.452316    8964 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 10:49:57.452346    8964 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 10:49:57.452402    8964 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 10:49:57.452489    8964 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 10:49:57.452563    8964 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 10:49:57.452638    8964 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 10:49:57.452679    8964 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 10:49:57.452723    8964 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 10:49:57.536733    8964 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 10:49:57.940061    8964 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 10:49:58.042930    8964 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 10:49:58.095354    8964 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 10:49:58.124806    8964 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 10:49:58.125180    8964 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 10:49:58.125234    8964 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 10:49:58.207710    8964 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 10:49:58.211973    8964 out.go:235]   - Booting up control plane ...
	I0920 10:49:58.212117    8964 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:49:58.212205    8964 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:49:58.212319    8964 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:49:58.212362    8964 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:49:58.213383    8964 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
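
The control plane comes up as static Pods: kubeadm writes one manifest per component into /etc/kubernetes/manifests and the kubelet launches whatever it finds there, with no running apiserver required. Inspecting that directory on the node at this point would show:

    ls /etc/kubernetes/manifests
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
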
	I0920 10:50:00.682427    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:00.682452    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:02.717211    8964 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503482 seconds
	I0920 10:50:02.717331    8964 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:50:02.722205    8964 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:50:03.234525    8964 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:50:03.234798    8964 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-097000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 10:50:03.741609    8964 kubeadm.go:310] [bootstrap-token] Using token: xcvjoh.a860vdhghdggd721
	I0920 10:50:03.747839    8964 out.go:235]   - Configuring RBAC rules ...
	I0920 10:50:03.747920    8964 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:50:03.747987    8964 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 10:50:03.750412    8964 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:50:03.754539    8964 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:50:03.755603    8964 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:50:03.756871    8964 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:50:03.760516    8964 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 10:50:03.927026    8964 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 10:50:04.147098    8964 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 10:50:04.147634    8964 kubeadm.go:310] 
	I0920 10:50:04.147670    8964 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 10:50:04.147674    8964 kubeadm.go:310] 
	I0920 10:50:04.147715    8964 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 10:50:04.147751    8964 kubeadm.go:310] 
	I0920 10:50:04.147768    8964 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 10:50:04.147816    8964 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:50:04.147849    8964 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:50:04.147851    8964 kubeadm.go:310] 
	I0920 10:50:04.147877    8964 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 10:50:04.147887    8964 kubeadm.go:310] 
	I0920 10:50:04.147915    8964 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 10:50:04.147918    8964 kubeadm.go:310] 
	I0920 10:50:04.147943    8964 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 10:50:04.147980    8964 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:50:04.148024    8964 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:50:04.148031    8964 kubeadm.go:310] 
	I0920 10:50:04.148084    8964 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 10:50:04.148134    8964 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 10:50:04.148138    8964 kubeadm.go:310] 
	I0920 10:50:04.148182    8964 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xcvjoh.a860vdhghdggd721 \
	I0920 10:50:04.148241    8964 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:060a9df3d803721427aee4d9db182572971f8fddfdaccc18183246a007d5e636 \
	I0920 10:50:04.148253    8964 kubeadm.go:310] 	--control-plane 
	I0920 10:50:04.148258    8964 kubeadm.go:310] 
	I0920 10:50:04.148306    8964 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:50:04.148312    8964 kubeadm.go:310] 
	I0920 10:50:04.148356    8964 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xcvjoh.a860vdhghdggd721 \
	I0920 10:50:04.148421    8964 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:060a9df3d803721427aee4d9db182572971f8fddfdaccc18183246a007d5e636 
	I0920 10:50:04.148499    8964 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 10:50:04.148509    8964 cni.go:84] Creating CNI manager for ""
	I0920 10:50:04.148517    8964 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:50:04.154226    8964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:50:04.160160    8964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:50:04.163306    8964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
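
The bridge CNI step writes a conflist into the standard CNI configuration directory; the container runtime loads the lexically first file it finds there. The log records only the path and size, so inspection on the node is the way to see the contents:

    # the 496-byte conflist written above configures the standard 'bridge' plugin
    ls /etc/cni/net.d
    cat /etc/cni/net.d/1-k8s.conflist
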
	I0920 10:50:04.168295    8964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:50:04.168356    8964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:50:04.168368    8964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-097000 minikube.k8s.io/updated_at=2024_09_20T10_50_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=running-upgrade-097000 minikube.k8s.io/primary=true
	I0920 10:50:04.213858    8964 ops.go:34] apiserver oom_adj: -16
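
The oom_adj read confirms the apiserver runs with OOM-kill protection: a strongly negative value makes the kernel's OOM killer avoid the process under memory pressure. The same check by hand:

    cat /proc/$(pgrep kube-apiserver)/oom_adj   # prints -16 per the log above
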
	I0920 10:50:04.213926    8964 kubeadm.go:1113] duration metric: took 45.626292ms to wait for elevateKubeSystemPrivileges
	I0920 10:50:04.213938    8964 kubeadm.go:394] duration metric: took 4m12.236082375s to StartCluster
	I0920 10:50:04.213947    8964 settings.go:142] acquiring lock: {Name:mk90c7bb0a96d07865bd05b5bab2437d4acfe4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:50:04.214113    8964 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:50:04.214470    8964 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/kubeconfig: {Name:mkc202c0538e947b3e0d9844748996d0c112bf36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:50:04.214666    8964 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:50:04.214737    8964 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:50:04.214765    8964 config.go:182] Loaded profile config "running-upgrade-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:50:04.214767    8964 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-097000"
	I0920 10:50:04.214829    8964 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-097000"
	W0920 10:50:04.214835    8964 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:50:04.214770    8964 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-097000"
	I0920 10:50:04.214888    8964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-097000"
	I0920 10:50:04.214861    8964 host.go:66] Checking if "running-upgrade-097000" exists ...
	I0920 10:50:04.216000    8964 kapi.go:59] client config for running-upgrade-097000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/running-upgrade-097000/client.key", CAFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f6e030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:50:04.216128    8964 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-097000"
	W0920 10:50:04.216135    8964 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:50:04.216149    8964 host.go:66] Checking if "running-upgrade-097000" exists ...
	I0920 10:50:04.219072    8964 out.go:177] * Verifying Kubernetes components...
	I0920 10:50:04.219415    8964 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:50:04.223165    8964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:50:04.223174    8964 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/running-upgrade-097000/id_rsa Username:docker}
	I0920 10:50:04.227010    8964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:50:04.230055    8964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:50:04.233063    8964 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:50:04.233070    8964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:50:04.233075    8964 sshutil.go:53] new ssh client: &{IP:localhost Port:51261 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/running-upgrade-097000/id_rsa Username:docker}
	I0920 10:50:04.302739    8964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:50:04.308258    8964 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:50:04.308312    8964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:50:04.312204    8964 api_server.go:72] duration metric: took 97.525917ms to wait for apiserver process to appear ...
	I0920 10:50:04.312211    8964 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:50:04.312218    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:04.318994    8964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:50:04.338124    8964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:50:04.631642    8964 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 10:50:04.631655    8964 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 10:50:05.683787    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:05.683885    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:09.314314    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:09.314354    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:10.685951    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:10.686031    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:14.314616    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:14.314638    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:15.688583    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:15.688622    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:19.314968    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:19.314990    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:20.689399    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:20.689424    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:24.315873    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:24.315914    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:25.691775    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:25.691858    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:29.317051    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:29.317089    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:30.692538    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:30.692648    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:50:30.704089    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:50:30.704176    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:50:30.714866    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:50:30.714952    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:50:30.725545    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:50:30.725617    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:50:30.736000    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:50:30.736091    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:50:34.318132    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:34.318159    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0920 10:50:34.633697    8964 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0920 10:50:34.638074    8964 out.go:177] * Enabled addons: storage-provisioner
	I0920 10:50:34.646114    8964 addons.go:510] duration metric: took 30.431509s for enable addons: enabled=[storage-provisioner]
	I0920 10:50:30.746567    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:50:30.746641    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:50:30.757578    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:50:30.757664    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:50:30.767548    9094 logs.go:276] 0 containers: []
	W0920 10:50:30.767564    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:50:30.767635    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:50:30.778360    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:50:30.778377    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:50:30.778382    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:50:30.783159    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:50:30.783165    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:50:30.821246    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:50:30.821256    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:50:30.833191    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:50:30.833204    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:50:30.851134    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:50:30.851147    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:50:30.863048    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:50:30.863061    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:50:30.887028    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:50:30.887042    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:50:30.903060    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:50:30.903073    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:50:30.914902    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:50:30.914916    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:50:30.936283    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:50:30.936297    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:50:30.978004    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:50:30.978018    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:50:30.993220    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:50:30.993236    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:50:31.004007    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:50:31.004018    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:50:31.018395    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:50:31.018408    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:50:31.029790    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:50:31.029802    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:50:31.057378    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:50:31.057388    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:50:33.654730    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:39.319368    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:39.319402    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:38.657142    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:38.657479    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:50:38.684326    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:50:38.684450    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:50:38.706495    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:50:38.706597    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:50:38.724707    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:50:38.724791    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:50:38.735360    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:50:38.735447    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:50:38.746078    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:50:38.746166    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:50:38.756946    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:50:38.757025    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:50:38.767697    9094 logs.go:276] 0 containers: []
	W0920 10:50:38.767710    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:50:38.767781    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:50:38.778364    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:50:38.778392    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:50:38.778399    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:50:38.789953    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:50:38.789963    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:50:38.830476    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:50:38.830484    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:50:38.867404    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:50:38.867416    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:50:38.878952    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:50:38.878965    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:50:38.894939    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:50:38.894951    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:50:38.909686    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:50:38.909701    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:50:38.923831    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:50:38.923843    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:50:38.927923    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:50:38.927929    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:50:38.963377    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:50:38.963388    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:50:38.978589    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:50:38.978601    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:50:39.004186    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:50:39.004196    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:50:39.016633    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:50:39.016647    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:50:39.031079    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:50:39.031093    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:50:39.047714    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:50:39.047729    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:50:39.060567    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:50:39.060578    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:50:44.320987    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:44.321038    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:41.580160    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:49.321566    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:49.321624    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:46.582443    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:46.582788    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:50:46.609243    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:50:46.609389    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:50:46.626817    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:50:46.626928    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:50:46.639878    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:50:46.639961    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:50:46.651144    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:50:46.651217    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:50:46.661664    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:50:46.661755    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:50:46.672046    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:50:46.672129    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:50:46.682390    9094 logs.go:276] 0 containers: []
	W0920 10:50:46.682401    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:50:46.682469    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:50:46.692970    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:50:46.692988    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:50:46.692993    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:50:46.704165    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:50:46.704175    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:50:46.728105    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:50:46.728115    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:50:46.766536    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:50:46.766554    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:50:46.784488    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:50:46.784499    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:50:46.801807    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:50:46.801819    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:50:46.819515    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:50:46.819525    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:50:46.823728    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:50:46.823735    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:50:46.859255    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:50:46.859267    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:50:46.871453    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:50:46.871464    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:50:46.884953    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:50:46.884962    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:50:46.898551    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:50:46.898561    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:50:46.936655    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:50:46.936666    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:50:46.947709    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:50:46.947721    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:50:46.959793    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:50:46.959805    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:50:46.973866    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:50:46.973879    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:50:49.487960    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:54.323822    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:54.323848    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:54.489114    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:54.489276    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:50:54.502320    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:50:54.502410    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:50:54.513506    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:50:54.513590    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:50:54.524030    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:50:54.524113    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:50:54.534292    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:50:54.534378    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:50:54.544500    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:50:54.544580    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:50:54.555394    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:50:54.555471    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:50:54.565635    9094 logs.go:276] 0 containers: []
	W0920 10:50:54.565647    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:50:54.565718    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:50:54.575664    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:50:54.575680    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:50:54.575685    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:50:54.587295    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:50:54.587306    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:50:54.598882    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:50:54.598894    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:50:54.623985    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:50:54.623996    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:50:54.628163    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:50:54.628170    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:50:54.644202    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:50:54.644212    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:50:54.658656    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:50:54.658666    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:50:54.672228    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:50:54.672237    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:50:54.707707    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:50:54.707722    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:50:54.722497    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:50:54.722507    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:50:54.734287    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:50:54.734299    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:50:54.746248    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:50:54.746259    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:50:54.764658    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:50:54.764672    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:50:54.776276    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:50:54.776292    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:50:54.813198    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:50:54.813212    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:50:54.861283    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:50:54.861296    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:50:59.326052    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:59.326098    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:57.381490    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:04.328370    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:04.328493    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:04.346365    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:04.346455    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:04.361642    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:04.361728    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:04.371984    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:04.372059    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:04.382890    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:04.382962    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:04.397593    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:04.397666    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:04.410606    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:04.410693    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:04.420768    8964 logs.go:276] 0 containers: []
	W0920 10:51:04.420778    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:04.420842    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:04.431478    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:04.431493    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:04.431499    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:04.445843    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:04.445856    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:04.457990    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:04.458010    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:04.472564    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:04.472575    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:04.484344    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:04.484358    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:04.504028    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:04.504047    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:04.541233    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:04.541252    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:04.577918    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:04.577933    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:04.592195    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:04.592206    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:04.604429    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:04.604440    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:04.617540    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:04.617553    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:04.622252    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:04.622260    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:04.634535    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:04.634547    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:02.384244    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:02.384432    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:02.397134    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:02.397229    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:02.408421    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:02.408523    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:02.418937    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:02.419021    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:02.429390    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:02.429472    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:02.441848    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:02.441928    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:02.452469    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:02.452552    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:02.462902    9094 logs.go:276] 0 containers: []
	W0920 10:51:02.462912    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:02.462979    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:02.474424    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:02.474445    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:02.474451    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:02.511187    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:02.511198    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:02.525363    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:02.525373    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:02.529490    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:02.529500    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:02.540856    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:02.540867    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:02.566266    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:02.566274    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:02.578733    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:02.578748    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:02.593423    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:02.593432    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:02.605057    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:02.605070    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:02.616585    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:02.616598    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:02.634797    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:02.634811    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:02.648348    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:02.648358    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:02.660009    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:02.660021    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:02.695968    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:02.695981    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:02.735562    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:02.735578    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:02.749714    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:02.749728    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:05.266625    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:07.160581    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:10.268917    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:10.269199    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:10.294089    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:10.294250    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:10.314827    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:10.314921    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:10.331915    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:10.332000    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:10.342513    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:10.342605    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:10.354149    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:10.354232    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:10.364957    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:10.365041    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:10.375514    9094 logs.go:276] 0 containers: []
	W0920 10:51:10.375528    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:10.375594    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:10.385957    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:10.385973    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:10.385979    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:10.390587    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:10.390594    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:10.406434    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:10.406446    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:10.428172    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:10.428186    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:10.453740    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:10.453753    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:10.488892    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:10.488905    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:10.504074    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:10.504090    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:10.546163    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:10.546174    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:10.560742    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:10.560752    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:10.575178    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:10.575193    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:10.613429    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:10.613440    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:10.629041    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:10.629058    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:10.641405    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:10.641415    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:10.656041    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:10.656051    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:10.667292    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:10.667304    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:10.681188    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:10.681199    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:12.162977    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:12.163199    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:12.180706    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:12.180808    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:12.193851    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:12.193943    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:12.206179    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:12.206256    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:12.217826    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:12.217911    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:12.228091    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:12.228176    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:12.238619    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:12.238696    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:12.248772    8964 logs.go:276] 0 containers: []
	W0920 10:51:12.248787    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:12.248859    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:12.259370    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:12.259386    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:12.259392    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:12.264281    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:12.264288    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:12.278542    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:12.278552    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:12.293326    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:12.293335    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:12.305112    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:12.305122    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:12.316942    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:12.316953    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:12.334738    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:12.334752    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:12.345999    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:12.346015    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:12.379865    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:12.379872    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:12.414408    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:12.414421    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:12.428096    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:12.428105    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:12.440386    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:12.440398    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:12.451971    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:12.451981    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:14.977167    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:13.195142    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:19.977629    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:19.977902    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:19.996280    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:19.996400    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:20.009903    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:20.009990    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:20.025305    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:20.025387    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:20.037114    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:20.037205    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:20.048307    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:20.048398    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:20.059227    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:20.059308    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:20.069634    8964 logs.go:276] 0 containers: []
	W0920 10:51:20.069645    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:20.069719    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:20.081914    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:20.081932    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:20.081939    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:20.097408    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:20.097419    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:20.121451    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:20.121470    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:20.132724    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:20.132736    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:20.167609    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:20.167620    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:20.182567    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:20.182583    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:20.196700    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:20.196711    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:20.207987    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:20.207999    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:20.223702    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:20.223718    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:20.235867    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:20.235878    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:20.253755    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:20.253765    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:20.265870    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:20.265881    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:20.299432    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:20.299443    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:18.197623    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:18.198068    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:18.231968    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:18.232127    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:18.250875    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:18.250976    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:18.264461    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:18.264551    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:18.276389    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:18.276476    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:18.287037    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:18.287122    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:18.297464    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:18.297546    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:18.308688    9094 logs.go:276] 0 containers: []
	W0920 10:51:18.308699    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:18.308771    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:18.319847    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:18.319866    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:18.319871    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:18.334520    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:18.334530    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:18.352249    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:18.352260    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:18.364353    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:18.364363    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:18.378571    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:18.378579    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:18.390123    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:18.390135    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:18.395035    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:18.395043    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:18.431102    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:18.431113    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:18.470649    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:18.470667    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:18.483121    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:18.483135    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:18.524045    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:18.524058    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:18.535808    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:18.535821    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:18.551821    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:18.551831    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:18.569356    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:18.569366    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:18.583595    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:18.583607    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:18.601139    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:18.601155    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:22.806277    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:21.130294    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:27.809037    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:27.809368    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:27.833536    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:27.833671    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:27.850092    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:27.850187    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:27.863545    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:27.863628    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:27.874612    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:27.874698    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:27.885250    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:27.885339    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:27.895703    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:27.895780    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:27.909174    8964 logs.go:276] 0 containers: []
	W0920 10:51:27.909187    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:27.909256    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:27.919543    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:27.919559    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:27.919565    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:27.933620    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:27.933630    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:27.945459    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:27.945470    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:27.957146    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:27.957159    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:27.983336    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:27.983349    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:27.987951    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:27.987959    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:28.030324    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:28.030336    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:28.045497    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:28.045510    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:28.056980    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:28.056991    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:28.069167    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:28.069179    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:28.083670    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:28.083681    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:28.101538    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:28.101549    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:28.113276    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:28.113292    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:30.650333    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:26.132669    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:26.132981    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:26.162875    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:26.163022    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:26.180110    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:26.180208    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:26.198644    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:26.198732    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:26.213131    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:26.213213    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:26.223846    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:26.223926    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:26.235007    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:26.235082    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:26.245639    9094 logs.go:276] 0 containers: []
	W0920 10:51:26.245651    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:26.245726    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:26.256304    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:26.256322    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:26.256328    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:26.271398    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:26.271409    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:26.283473    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:26.283484    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:26.295389    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:26.295399    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:26.334293    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:26.334302    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:26.339106    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:26.339115    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:26.376633    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:26.376646    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:26.399727    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:26.399734    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:26.435099    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:26.435110    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:26.449451    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:26.449465    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:26.460925    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:26.460939    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:26.474696    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:26.474707    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:26.490753    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:26.490767    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:26.502750    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:26.502767    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:26.517234    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:26.517248    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:26.534381    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:26.534394    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:29.050057    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:35.652805    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:35.653395    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:34.051053    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:34.051232    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:34.062722    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:34.062813    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:34.073627    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:34.073709    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:34.084911    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:34.084998    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:34.095748    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:34.095835    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:34.106761    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:34.106845    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:34.117390    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:34.117475    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:34.131139    9094 logs.go:276] 0 containers: []
	W0920 10:51:34.131150    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:34.131219    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:34.145681    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:34.145697    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:34.145702    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:34.157016    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:34.157026    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:34.168512    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:34.168521    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:34.183099    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:34.183114    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:34.195098    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:34.195110    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:34.211398    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:34.211408    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:34.225504    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:34.225518    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:34.240115    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:34.240126    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:34.258640    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:34.258655    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:34.274153    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:34.274169    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:34.288046    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:34.288058    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:34.327320    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:34.327333    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:34.365632    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:34.365644    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:34.388576    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:34.388583    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:34.400296    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:34.400311    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:34.404576    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:34.404582    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:35.689749    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:35.689917    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:35.710177    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:35.710285    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:35.725328    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:35.725424    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:35.737775    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:35.737860    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:35.748460    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:35.748537    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:35.762625    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:35.762701    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:35.772797    8964 logs.go:276] 0 containers: []
	W0920 10:51:35.772811    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:35.772886    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:35.782782    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:35.782797    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:35.782803    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:35.798409    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:35.798423    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:35.810889    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:35.810905    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:35.822816    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:35.822827    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:35.840663    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:35.840676    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:35.852493    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:35.852504    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:35.875406    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:35.875414    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:35.908492    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:35.908503    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:35.912726    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:35.912734    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:35.924793    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:35.924803    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:35.939521    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:35.939531    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:35.951106    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:35.951120    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:35.986586    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:35.986602    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:38.502578    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:36.947218    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:43.504913    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:43.505109    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:43.519130    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:43.519218    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:43.530724    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:43.530814    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:43.545490    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:43.545575    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:43.556214    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:43.556295    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:43.566483    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:43.566569    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:43.576972    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:43.577052    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:43.587219    8964 logs.go:276] 0 containers: []
	W0920 10:51:43.587229    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:43.587296    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:43.597337    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:43.597355    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:43.597361    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:43.630864    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:43.630873    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:43.667286    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:43.667302    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:43.681826    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:43.681842    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:43.701484    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:43.701500    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:43.713127    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:43.713141    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:43.724971    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:43.724979    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:43.747817    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:43.747826    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:43.760935    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:43.760952    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:43.765103    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:43.765111    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:43.779241    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:43.779251    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:43.794315    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:43.794331    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:43.812163    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:43.812177    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:41.948079    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:41.948348    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:41.968336    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:41.968443    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:41.981844    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:41.981932    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:41.995562    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:41.995643    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:42.006622    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:42.006710    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:42.018278    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:42.018360    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:42.029103    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:42.029181    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:42.039864    9094 logs.go:276] 0 containers: []
	W0920 10:51:42.039877    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:42.039944    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:42.052626    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:42.052644    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:42.052649    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:42.070776    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:42.070786    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:42.088943    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:42.088954    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:42.100698    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:42.100709    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:42.114216    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:42.114226    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:42.126898    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:42.126912    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:42.164393    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:42.164407    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:42.178405    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:42.178413    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:42.189846    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:42.189858    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:42.213681    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:42.213706    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:42.233006    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:42.233020    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:42.271360    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:42.271367    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:42.275297    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:42.275303    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:42.289419    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:42.289429    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:42.307701    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:42.307715    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:42.342182    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:42.342197    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:44.857213    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:46.324508    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:49.859809    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:49.860048    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:49.879157    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:49.879278    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:49.896472    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:49.896567    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:49.909431    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:49.909512    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:49.919661    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:49.919736    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:49.930384    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:49.930461    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:49.941566    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:49.941646    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:49.951243    9094 logs.go:276] 0 containers: []
	W0920 10:51:49.951255    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:49.951328    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:49.964304    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:49.964321    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:49.964328    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:49.982126    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:49.982141    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:49.993999    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:49.994012    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:50.008516    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:50.008527    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:50.022337    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:50.022348    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:50.063082    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:50.063102    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:50.101385    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:50.101400    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:50.116397    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:50.116407    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:50.139884    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:50.139910    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:50.144127    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:50.144133    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:50.158496    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:50.158506    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:50.172797    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:50.172805    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:50.183638    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:50.183649    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:50.195505    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:50.195514    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:50.207358    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:50.207373    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:50.219234    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:50.219245    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:51.327186    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:51.327510    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:51.360616    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:51.360765    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:51.378625    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:51.378765    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:51.392527    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:51.392612    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:51.407582    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:51.407659    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:51.418857    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:51.418939    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:51.430128    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:51.430207    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:51.440442    8964 logs.go:276] 0 containers: []
	W0920 10:51:51.440453    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:51.440517    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:51.451550    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:51.451566    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:51.451572    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:51.464115    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:51.464125    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:51.482166    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:51.482181    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:51.507565    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:51.507573    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:51.541972    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:51.541980    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:51.578896    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:51.578908    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:51.593916    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:51.593927    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:51.608753    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:51.608765    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:51.620263    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:51.620274    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:51.632522    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:51.632538    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:51.636815    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:51.636822    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:51.654257    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:51.654268    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:51.666891    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:51.666902    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:54.181478    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:52.756321    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:59.183742    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:59.183914    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:59.197429    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:51:59.197503    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:59.210124    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:51:59.210211    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:59.221520    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:51:59.221607    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:59.232669    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:51:59.232741    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:59.243702    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:51:59.243792    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:59.254694    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:51:59.254775    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:59.264997    8964 logs.go:276] 0 containers: []
	W0920 10:51:59.265009    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:59.265080    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:59.276350    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:51:59.276365    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:51:59.276370    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:51:59.288431    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:51:59.288441    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:51:59.304796    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:59.304808    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:59.340609    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:51:59.340622    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:51:59.355726    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:51:59.355738    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:51:59.367891    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:51:59.367903    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:51:59.382674    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:51:59.382684    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:51:59.401280    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:51:59.401290    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:51:59.413591    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:59.413601    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:59.438822    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:51:59.438831    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:59.451191    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:59.451204    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:59.487349    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:59.487364    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:59.491969    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:51:59.491975    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:51:57.756787    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:57.757310    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:57.793279    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:57.793434    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:57.813035    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:57.813145    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:57.827646    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:57.827744    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:57.839997    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:57.840083    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:57.854240    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:57.854324    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:57.865035    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:57.865118    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:57.884386    9094 logs.go:276] 0 containers: []
	W0920 10:51:57.884398    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:57.884471    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:57.894941    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:57.894958    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:57.894964    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:57.910432    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:57.910443    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:57.924570    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:57.924581    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:57.939210    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:57.939221    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:57.953231    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:57.953240    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:57.966521    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:57.966533    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:58.004103    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:58.004117    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:58.016578    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:58.016589    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:58.030758    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:58.030768    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:58.046313    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:58.046323    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:58.070950    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:58.070960    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:58.083357    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:58.083369    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:58.095140    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:58.095153    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:58.133564    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:58.133576    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:58.150835    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:58.150846    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:58.189415    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:58.189425    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:00.695604    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:02.007756    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:05.697955    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:05.698371    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:07.010357    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:07.010503    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:07.023199    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:07.023285    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:07.034553    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:07.034640    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:07.045217    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:52:07.045302    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:07.056746    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:07.056824    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:07.067040    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:07.067126    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:07.077173    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:07.077253    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:07.092094    8964 logs.go:276] 0 containers: []
	W0920 10:52:07.092105    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:07.092175    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:07.102756    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:07.102772    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:07.102778    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:07.136351    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:07.136360    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:07.140921    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:07.140930    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:07.180224    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:07.180234    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:07.194009    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:07.194020    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:07.209052    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:07.209063    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:07.228997    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:07.229013    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:07.243853    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:07.243864    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:07.255428    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:07.255445    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:07.272348    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:07.272358    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:07.283491    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:07.283499    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:07.308654    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:07.308661    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:07.324927    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:07.324944    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:09.837148    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:05.738725    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:05.738888    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:05.759938    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:05.760060    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:05.775586    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:05.775682    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:05.788415    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:05.788502    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:05.803707    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:05.803788    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:05.820853    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:05.820939    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:05.831041    9094 logs.go:276] 0 containers: []
	W0920 10:52:05.831054    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:05.831123    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:05.842006    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:05.842023    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:05.842029    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:05.880448    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:05.880458    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:05.897421    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:05.897436    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:05.927450    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:05.927466    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:05.941554    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:05.941564    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:05.946040    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:05.946047    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:05.986648    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:05.986663    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:06.001365    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:06.001381    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:06.040956    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:06.040973    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:06.052982    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:06.052992    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:06.067255    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:06.067266    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:06.079672    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:06.079683    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:06.097334    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:06.097345    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:06.109006    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:06.109017    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:06.124161    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:06.124177    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:06.144780    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:06.144791    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:08.659274    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:14.839550    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:14.839869    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:14.865858    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:14.865998    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:14.889769    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:14.889865    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:14.902675    8964 logs.go:276] 2 containers: [2200b92078db 39603ebf59b8]
	I0920 10:52:14.902762    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:14.913313    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:14.913398    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:14.924686    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:14.924772    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:14.934745    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:14.934825    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:14.944649    8964 logs.go:276] 0 containers: []
	W0920 10:52:14.944661    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:14.944732    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:14.956628    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:14.956644    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:14.956650    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:14.971470    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:14.971479    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:14.983241    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:14.983252    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:14.994799    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:14.994810    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:15.027581    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:15.027588    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:15.031961    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:15.031967    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:15.067290    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:15.067300    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:15.081630    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:15.081641    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:15.096294    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:15.096306    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:15.109727    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:15.109743    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:15.121612    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:15.121624    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:15.133742    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:15.133757    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:15.153295    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:15.153311    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:13.661962    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:13.662184    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:13.674861    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:13.674953    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:13.685889    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:13.685964    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:13.696066    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:13.696143    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:13.706594    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:13.706672    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:13.721882    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:13.721967    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:13.732641    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:13.732714    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:13.742544    9094 logs.go:276] 0 containers: []
	W0920 10:52:13.742555    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:13.742619    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:13.756937    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:13.756955    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:13.756961    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:13.796592    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:13.796613    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:13.807958    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:13.807969    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:13.822170    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:13.822181    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:13.846171    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:13.846183    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:13.882104    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:13.882114    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:13.905344    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:13.905350    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:13.916858    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:13.916869    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:13.920796    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:13.920802    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:13.934621    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:13.934632    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:13.953861    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:13.953877    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:13.967577    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:13.967591    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:13.980098    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:13.980110    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:13.998359    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:13.998373    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:14.036310    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:14.036324    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:14.050458    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:14.050471    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
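The block above is one complete diagnostic pass: two parallel test processes (PIDs 8964 and 9094) each poll https://10.0.2.15:8443/healthz, and whenever the request times out they re-enumerate the k8s_* containers and tail their logs before retrying. Below is a minimal Go sketch of that probe, for illustration only; the function name, the exact timeout value, and the TLS handling are assumptions, not minikube's actual api_server.go code.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkAPIServerHealthz is a hypothetical reconstruction of the poll in the
    // log above: GET /healthz with a short client timeout. A slow or dead
    // apiserver surfaces exactly as the log's "context deadline exceeded
    // (Client.Timeout exceeded while awaiting headers)".
    func checkAPIServerHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // the log shows ~5s between "Checking" and "stopped:"
            Transport: &http.Transport{
                // the apiserver presents a cluster-local certificate; this
                // sketch skips verification rather than loading the cluster CA
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        if err := checkAPIServerHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(err)
        }
    }
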
	I0920 10:52:17.677878    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:16.562192    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:22.680252    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:22.680436    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:22.700127    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:22.700222    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:22.710962    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:22.711046    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:22.721934    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:52:22.722011    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:22.732317    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:22.732399    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:22.742215    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:22.742296    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:22.755157    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:22.755244    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:22.765198    8964 logs.go:276] 0 containers: []
	W0920 10:52:22.765209    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:22.765272    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:22.775732    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:22.775750    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:22.775756    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:22.793810    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:22.793822    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:22.805902    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:22.805914    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:22.810526    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:22.810533    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:22.876526    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:22.876536    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:22.897292    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:22.897303    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:22.909628    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:22.909639    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:22.921291    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:22.921300    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:22.935941    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:22.935952    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:22.948122    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:22.948138    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:22.964532    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:22.964541    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:22.999107    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:52:22.999115    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:52:23.010319    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:23.010331    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:23.024592    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:52:23.024609    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:52:23.044705    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:23.044717    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:25.572130    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:21.564655    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:21.565012    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:21.593923    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:21.594063    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:21.611596    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:21.611705    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:21.625755    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:21.625848    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:21.637346    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:21.637442    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:21.647997    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:21.648082    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:21.658804    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:21.658880    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:21.668826    9094 logs.go:276] 0 containers: []
	W0920 10:52:21.668838    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:21.668910    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:21.688930    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:21.688950    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:21.688955    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:21.707816    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:21.707826    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:21.721805    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:21.721820    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:21.744977    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:21.744984    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:21.756900    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:21.756915    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:21.761401    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:21.761408    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:21.799736    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:21.799747    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:21.813048    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:21.813061    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:21.831143    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:21.831154    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:21.867938    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:21.867947    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:21.882157    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:21.882166    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:21.893634    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:21.893646    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:21.909627    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:21.909638    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:21.943674    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:21.943685    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:21.957915    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:21.957926    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:21.970184    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:21.970196    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:24.487990    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:30.574503    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:30.574671    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:30.587223    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:30.587297    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:30.597875    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:30.597960    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:30.608816    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:52:30.608905    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:30.619680    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:30.619760    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:30.630176    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:30.630262    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:30.640626    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:30.640701    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:30.651229    8964 logs.go:276] 0 containers: []
	W0920 10:52:30.651243    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:30.651317    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:30.661207    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:30.661225    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:52:30.661231    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:52:30.672326    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:30.672338    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:29.490341    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:29.490552    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:29.512509    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:29.512592    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:29.523126    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:29.523207    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:29.533968    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:29.534050    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:29.544508    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:29.544592    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:29.555584    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:29.555666    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:29.566170    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:29.566259    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:29.576919    9094 logs.go:276] 0 containers: []
	W0920 10:52:29.576930    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:29.577001    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:29.587231    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:29.587247    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:29.587253    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:29.623855    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:29.623865    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:29.637660    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:29.637673    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:29.660622    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:29.660628    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:29.664524    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:29.664530    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:29.698078    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:29.698090    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:29.716743    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:29.716753    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:29.731052    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:29.731062    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:29.742779    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:29.742789    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:29.761423    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:29.761434    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:29.776679    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:29.776689    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:29.794113    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:29.794125    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:29.832666    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:29.832676    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:29.848383    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:29.848398    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:29.860131    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:29.860141    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:29.873663    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:29.873673    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:30.698188    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:30.698199    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:30.709841    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:30.709853    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:30.744814    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:30.744824    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:30.782144    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:30.782154    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:30.800911    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:30.800921    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:30.813141    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:30.813153    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:30.831049    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:30.831059    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:30.835442    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:52:30.835451    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:52:30.846387    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:30.846399    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:30.857747    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:30.857758    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:30.869521    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:30.869535    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:30.884147    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:30.884162    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:30.896328    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:30.896340    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:33.413572    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:32.387459    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:38.413006    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:38.413460    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:38.446511    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:38.446686    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:38.469786    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:38.469898    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:38.484680    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:52:38.484770    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:38.501291    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:38.501385    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:38.512151    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:38.512233    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:38.522888    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:38.522982    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:38.533284    8964 logs.go:276] 0 containers: []
	W0920 10:52:38.533297    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:38.533370    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:38.544318    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:38.544334    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:38.544340    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:38.559065    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:38.559076    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:38.573463    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:38.573476    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:38.595629    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:38.595642    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:38.630501    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:38.630508    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:38.634940    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:38.634948    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:38.648802    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:52:38.648813    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:52:38.660326    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:38.660336    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:38.674944    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:38.674955    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:38.710805    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:38.710819    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:38.722761    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:38.722778    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:38.734723    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:38.734734    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:38.760063    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:52:38.760075    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:52:38.771826    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:38.771836    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:38.787258    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:38.787269    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:37.388012    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:37.388466    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:37.426284    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:37.426415    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:37.444211    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:37.444315    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:37.457925    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:37.458006    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:37.473217    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:37.473300    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:37.484034    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:37.484118    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:37.494837    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:37.494924    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:37.510816    9094 logs.go:276] 0 containers: []
	W0920 10:52:37.510830    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:37.510901    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:37.523437    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:37.523456    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:37.523462    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:37.538464    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:37.538478    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:37.550565    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:37.550576    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:37.586888    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:37.586896    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:37.624852    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:37.624862    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:37.636228    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:37.636240    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:37.663980    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:37.663995    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:37.678381    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:37.678392    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:37.693246    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:37.693256    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:37.705377    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:37.705388    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:37.729215    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:37.729222    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:37.740881    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:37.740891    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:37.745315    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:37.745322    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:37.780551    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:37.780562    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:37.795184    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:37.795194    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:37.809714    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:37.809724    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:40.321775    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:41.300165    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:45.320593    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:45.321044    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:45.352527    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:45.352693    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:45.370913    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:45.371024    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:45.384711    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:45.384804    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:45.396379    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:45.396460    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:45.406917    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:45.406998    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:45.417754    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:45.417835    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:45.428471    9094 logs.go:276] 0 containers: []
	W0920 10:52:45.428482    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:45.428543    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:45.438613    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:45.438631    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:45.438637    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:45.481490    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:45.481508    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:45.499169    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:45.499185    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:45.510302    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:45.510316    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:45.527610    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:45.527626    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:45.552177    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:45.552188    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:45.556343    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:45.556350    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:45.569979    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:45.569993    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:45.581775    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:45.581788    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:45.600260    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:45.600272    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:45.612361    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:45.612371    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:45.650988    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:45.650996    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:45.669926    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:45.669937    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:45.681380    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:45.681394    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:45.694102    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:45.694115    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:45.729164    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:45.729175    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:46.300480    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:46.300668    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:46.320086    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:46.320185    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:46.331718    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:46.331796    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:46.346176    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:52:46.346263    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:46.357190    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:46.357273    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:46.367167    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:46.367247    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:46.377441    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:46.377529    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:46.387653    8964 logs.go:276] 0 containers: []
	W0920 10:52:46.387665    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:46.387741    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:46.398080    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:46.398098    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:52:46.398104    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:52:46.411104    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:46.411121    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:46.422318    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:46.422330    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:46.436727    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:46.436739    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:46.454552    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:46.454563    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:46.466105    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:46.466116    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:46.477353    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:46.477369    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:46.482245    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:46.482253    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:46.496066    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:52:46.496077    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:52:46.507698    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:46.507713    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:46.532225    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:46.532233    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:46.567083    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:46.567098    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:46.602603    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:46.602619    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:46.617530    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:46.617542    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:46.633037    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:46.633049    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:49.146269    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:48.253674    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:54.145558    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:54.145761    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:54.165667    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:52:54.165771    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:54.179390    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:52:54.179468    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:54.191188    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:52:54.191275    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:54.202655    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:52:54.202739    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:54.213181    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:52:54.213262    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:54.223579    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:52:54.223663    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:54.233847    8964 logs.go:276] 0 containers: []
	W0920 10:52:54.233857    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:54.233920    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:54.244860    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:52:54.244877    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:52:54.244883    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:52:54.257369    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:52:54.257379    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:52:54.271914    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:52:54.271924    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:52:54.289549    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:52:54.289564    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:52:54.301421    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:54.301436    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:54.307663    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:52:54.307674    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:52:54.324213    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:52:54.324226    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:52:54.338576    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:52:54.338593    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:52:54.350319    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:52:54.350330    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:52:54.361991    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:52:54.362007    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:52:54.373079    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:52:54.373090    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:52:54.384871    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:54.384882    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:54.419305    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:54.419314    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:54.454611    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:54.454626    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:54.480136    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:52:54.480145    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:53.254922    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:53.255294    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:53.302387    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:53.302485    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:53.320028    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:53.320108    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:53.330941    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:53.331017    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:53.341571    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:53.341662    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:53.352533    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:53.352620    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:53.363015    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:53.363093    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:53.373140    9094 logs.go:276] 0 containers: []
	W0920 10:52:53.373151    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:53.373221    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:53.383485    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:53.383501    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:53.383507    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:53.417940    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:53.417956    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:53.432808    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:53.432822    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:53.447788    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:53.447800    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:53.459667    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:53.459677    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:53.471493    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:53.471503    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:53.475780    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:53.475788    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:53.490578    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:53.490589    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:53.509264    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:53.509273    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:53.532618    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:53.532630    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:53.570552    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:53.570564    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:53.581944    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:53.581953    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:53.593610    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:53.593621    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:53.632296    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:53.632304    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:53.643852    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:53.643862    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:53.658855    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:53.658865    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:56.993455    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:56.181707    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:01.995179    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:01.995430    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:02.013303    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:02.013411    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:02.025416    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:02.025504    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:02.037009    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:02.037093    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:02.047443    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:02.047522    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:02.058241    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:02.058316    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:02.069119    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:02.069206    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:02.085179    8964 logs.go:276] 0 containers: []
	W0920 10:53:02.085191    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:02.085265    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:02.095610    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:02.095627    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:02.095632    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:02.108755    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:02.108767    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:02.126893    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:02.126905    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:02.152154    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:02.152163    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:02.166379    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:02.166392    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:02.177585    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:02.177596    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:02.188884    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:02.188896    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:02.194141    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:02.194147    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:02.208726    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:02.208742    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:02.222374    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:02.222384    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:02.258371    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:02.258387    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:02.270786    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:02.270796    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:02.282552    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:02.282566    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:02.294450    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:02.294463    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:02.312953    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:02.312969    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
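[editor's note] Each diagnostic cycle above follows one pattern: list candidate containers per control-plane component with a `docker ps` name filter, then tail the last 400 lines of each container's log. A condensed local sketch of that loop is below; the helper name is an assumption, and minikube actually runs these commands remotely through its ssh_runner rather than on the host.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs mirrors the cycle in the log: for each component,
// find its container IDs, then tail each container's log.
func gatherComponentLogs(components []string) {
	for _, c := range components {
		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}

func main() {
	gatherComponentLogs([]string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner",
	})
}
```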
	I0920 10:53:04.850046    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:01.182520    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:01.182798    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:01.203670    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:53:01.203783    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:01.218179    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:53:01.218271    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:01.230676    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:53:01.230760    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:01.241921    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:53:01.242001    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:01.252444    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:53:01.252525    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:01.262933    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:53:01.263018    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:01.273471    9094 logs.go:276] 0 containers: []
	W0920 10:53:01.273483    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:01.273557    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:01.283942    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:53:01.283958    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:01.283964    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:01.322991    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:01.323004    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:01.327110    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:53:01.327116    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:53:01.342198    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:01.342208    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:01.381899    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:53:01.381913    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:53:01.400304    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:01.400314    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:01.423241    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:53:01.423248    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:01.434805    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:53:01.434817    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:53:01.447391    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:53:01.447405    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:53:01.462138    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:53:01.462152    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:53:01.480550    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:53:01.480565    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:53:01.497047    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:53:01.497059    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:53:01.517904    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:53:01.517915    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:53:01.556265    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:53:01.556278    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:53:01.570336    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:53:01.570346    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:53:01.582610    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:53:01.582623    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:53:04.098420    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:09.851912    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:09.852157    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:09.872321    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:09.872450    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:09.887020    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:09.887110    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:09.899925    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:09.900012    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:09.910451    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:09.910529    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:09.921531    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:09.921608    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:09.932473    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:09.932544    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:09.942548    8964 logs.go:276] 0 containers: []
	W0920 10:53:09.942559    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:09.942627    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:09.953403    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:09.953420    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:09.953426    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:09.988696    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:09.988703    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:10.002726    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:10.002737    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:10.017970    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:10.017982    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:10.043341    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:10.043351    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:10.079078    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:10.079089    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:10.091138    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:10.091148    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:10.095804    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:10.095810    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:10.111785    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:10.111798    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:10.130407    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:10.130424    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:10.142617    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:10.142630    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:10.156941    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:10.156953    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:10.171341    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:10.171353    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:10.183870    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:10.183883    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:10.196367    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:10.196378    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:09.100329    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:09.100869    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:09.139483    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:53:09.139656    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:09.160285    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:53:09.160424    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:09.178313    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:53:09.178396    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:09.190491    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:53:09.190581    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:09.206638    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:53:09.206733    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:09.218539    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:53:09.218625    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:09.229574    9094 logs.go:276] 0 containers: []
	W0920 10:53:09.229584    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:09.229649    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:09.240380    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:53:09.240404    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:53:09.240409    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:53:09.255357    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:53:09.255368    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:53:09.267481    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:53:09.267491    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:53:09.279420    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:09.279430    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:09.301884    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:53:09.301891    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:53:09.316374    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:53:09.316386    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:53:09.328191    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:53:09.328201    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:53:09.340130    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:53:09.340140    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:53:09.357507    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:53:09.357518    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:53:09.372292    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:53:09.372302    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:09.385131    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:09.385141    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:09.422191    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:53:09.422203    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:53:09.461266    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:53:09.461287    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:53:09.475747    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:09.475756    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:09.480356    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:09.480362    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:09.516360    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:53:09.516374    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:53:12.714073    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:12.033764    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:17.716147    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:17.716312    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:17.731833    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:17.731932    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:17.745528    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:17.745626    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:17.756178    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:17.756257    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:17.772046    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:17.772131    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:17.787676    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:17.787760    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:17.798693    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:17.798770    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:17.808766    8964 logs.go:276] 0 containers: []
	W0920 10:53:17.808780    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:17.808869    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:17.819520    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:17.819544    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:17.819550    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:17.836536    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:17.836549    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:17.848522    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:17.848532    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:17.862915    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:17.862925    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:17.881801    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:17.881812    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:17.906481    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:17.906489    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:17.940013    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:17.940025    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:17.951843    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:17.951855    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:17.967051    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:17.967064    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:17.978717    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:17.978730    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:17.990973    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:17.990986    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:17.996186    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:17.996198    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:18.037757    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:18.037768    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:18.049098    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:18.049111    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:18.062580    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:18.062593    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:20.576382    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:17.034113    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:17.034408    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:17.060279    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:53:17.060425    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:17.078790    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:53:17.078882    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:17.091771    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:53:17.091866    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:17.102730    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:53:17.102811    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:17.112845    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:53:17.112926    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:17.123615    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:53:17.123695    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:17.133844    9094 logs.go:276] 0 containers: []
	W0920 10:53:17.133859    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:17.133926    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:17.144249    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:53:17.144265    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:17.144270    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:17.180114    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:53:17.180130    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:53:17.192066    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:17.192080    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:17.196488    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:53:17.196495    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:53:17.233288    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:53:17.233301    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:53:17.251290    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:53:17.251305    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:53:17.264867    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:17.264885    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:17.304278    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:53:17.304292    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:53:17.324607    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:53:17.324619    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:53:17.338526    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:53:17.338539    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:53:17.350276    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:53:17.350289    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:53:17.364979    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:53:17.364993    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:53:17.379181    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:53:17.379195    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:53:17.391524    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:53:17.391540    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:53:17.403161    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:17.403174    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:17.425405    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:53:17.425414    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:19.939510    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:25.578670    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:25.578826    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:25.592097    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:25.592201    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:25.603005    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:25.603084    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:25.614411    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:25.614495    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:25.625315    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:25.625401    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:25.636176    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:25.636253    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:25.646766    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:25.646842    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:25.656563    8964 logs.go:276] 0 containers: []
	W0920 10:53:25.656576    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:25.656645    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:25.666968    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:25.666984    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:25.666990    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:24.941676    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:24.941996    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:24.966123    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:53:24.966271    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:24.981929    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:53:24.982021    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:24.995030    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:53:24.995105    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:25.009905    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:53:25.009985    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:25.020577    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:53:25.020654    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:25.033371    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:53:25.033458    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:25.043270    9094 logs.go:276] 0 containers: []
	W0920 10:53:25.043286    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:25.043356    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:25.053502    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:53:25.053521    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:53:25.053525    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:53:25.067628    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:53:25.067638    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:25.080861    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:25.080874    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:25.115278    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:53:25.115287    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:53:25.129181    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:53:25.129190    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:53:25.166994    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:53:25.167005    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:53:25.179833    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:53:25.179844    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:53:25.194398    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:53:25.194409    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:53:25.205866    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:53:25.205878    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:53:25.220645    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:53:25.220658    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:53:25.238002    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:53:25.238015    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:53:25.252069    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:25.252082    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:25.275559    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:25.275567    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:25.315311    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:25.315325    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:25.320135    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:53:25.320142    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:53:25.331417    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:53:25.331427    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:53:25.708289    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:25.708305    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:25.723056    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:25.723066    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:25.744752    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:25.744763    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:25.762396    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:25.762406    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:25.788018    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:25.788028    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:25.792523    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:25.792533    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:25.803801    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:25.803814    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:25.817703    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:25.817718    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:25.828969    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:25.828980    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:25.841529    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:25.841543    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:25.852689    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:25.852701    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:25.887537    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:25.887546    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:25.899468    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:25.899484    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:25.914226    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:25.914241    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:28.426231    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:27.848751    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:32.851313    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:32.851386    9094 kubeadm.go:597] duration metric: took 4m3.805727625s to restartPrimaryControlPlane
	W0920 10:53:32.851453    9094 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 10:53:32.851479    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0920 10:53:33.899697    9094 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.048223041s)
	I0920 10:53:33.899772    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:53:33.904872    9094 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:53:33.907950    9094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:53:33.910786    9094 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:53:33.910793    9094 kubeadm.go:157] found existing configuration files:
	
	I0920 10:53:33.910822    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/admin.conf
	I0920 10:53:33.913297    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:53:33.913328    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:53:33.915852    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/kubelet.conf
	I0920 10:53:33.918880    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:53:33.918901    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:53:33.921490    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/controller-manager.conf
	I0920 10:53:33.924037    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:53:33.924061    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:53:33.927182    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/scheduler.conf
	I0920 10:53:33.930042    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:53:33.930070    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
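[editor's note] The four grep/rm pairs above implement a simple staleness check: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed before `kubeadm init` regenerates it (here every grep exits with status 2 because the files are already gone after the reset). A sketch of that loop, assuming the same shell commands minikube issues over SSH:

```go
package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, matching the grep-then-rm pairs above.
func cleanStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		// sudo grep <endpoint> <file>; non-zero exit means missing or stale
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:51545", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```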
	I0920 10:53:33.932557    9094 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 10:53:33.950603    9094 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0920 10:53:33.950699    9094 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 10:53:33.999288    9094 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 10:53:33.999343    9094 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 10:53:33.999401    9094 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 10:53:34.052546    9094 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 10:53:34.060674    9094 out.go:235]   - Generating certificates and keys ...
	I0920 10:53:34.060710    9094 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 10:53:34.060745    9094 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 10:53:34.060781    9094 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 10:53:34.060848    9094 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 10:53:34.060884    9094 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 10:53:34.060914    9094 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 10:53:34.060950    9094 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 10:53:34.060986    9094 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 10:53:34.061032    9094 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 10:53:34.061075    9094 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 10:53:34.061097    9094 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 10:53:34.061126    9094 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 10:53:34.086765    9094 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 10:53:34.174692    9094 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 10:53:34.244390    9094 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 10:53:34.427178    9094 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 10:53:34.458171    9094 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 10:53:34.458545    9094 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 10:53:34.458576    9094 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 10:53:34.545176    9094 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 10:53:33.428406    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:33.428540    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:33.440395    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:33.440479    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:33.452309    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:33.452395    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:33.464264    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:33.464350    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:33.476431    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:33.476515    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:33.488169    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:33.488251    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:33.499725    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:33.499808    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:33.510091    8964 logs.go:276] 0 containers: []
	W0920 10:53:33.510105    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:33.510181    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:33.520651    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:33.520668    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:33.520674    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:33.536814    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:33.536825    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:33.549305    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:33.549319    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:33.585943    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:33.585964    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:33.598811    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:33.598826    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:33.612127    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:33.612141    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:33.624741    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:33.624753    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:33.629418    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:33.629430    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:33.646287    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:33.646299    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:33.661442    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:33.661453    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:33.673119    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:33.673133    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:33.709601    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:33.709615    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:33.727116    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:33.727129    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:33.740838    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:33.740854    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:33.759803    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:33.759820    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:34.549380    9094 out.go:235]   - Booting up control plane ...
	I0920 10:53:34.549426    9094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:53:34.549472    9094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:53:34.549532    9094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:53:34.549576    9094 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:53:34.549688    9094 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 10:53:39.051053    9094 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502493 seconds
	I0920 10:53:39.051127    9094 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:53:39.054958    9094 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:53:39.566393    9094 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:53:39.566808    9094 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-770000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 10:53:40.070038    9094 kubeadm.go:310] [bootstrap-token] Using token: oamarz.9okfcddbvqluxbug
	I0920 10:53:40.076675    9094 out.go:235]   - Configuring RBAC rules ...
	I0920 10:53:40.076747    9094 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:53:40.076803    9094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 10:53:40.078791    9094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:53:40.083068    9094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:53:40.084011    9094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:53:40.084784    9094 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:53:40.088351    9094 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 10:53:40.235955    9094 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 10:53:40.475966    9094 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 10:53:40.476472    9094 kubeadm.go:310] 
	I0920 10:53:40.476514    9094 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 10:53:40.476559    9094 kubeadm.go:310] 
	I0920 10:53:40.476702    9094 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 10:53:40.476733    9094 kubeadm.go:310] 
	I0920 10:53:40.476749    9094 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 10:53:40.476781    9094 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:53:40.476813    9094 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:53:40.476818    9094 kubeadm.go:310] 
	I0920 10:53:40.476849    9094 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 10:53:40.476852    9094 kubeadm.go:310] 
	I0920 10:53:40.476874    9094 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 10:53:40.476879    9094 kubeadm.go:310] 
	I0920 10:53:40.476905    9094 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 10:53:40.476954    9094 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:53:40.476999    9094 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:53:40.477004    9094 kubeadm.go:310] 
	I0920 10:53:40.477052    9094 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 10:53:40.477093    9094 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 10:53:40.477097    9094 kubeadm.go:310] 
	I0920 10:53:40.477145    9094 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oamarz.9okfcddbvqluxbug \
	I0920 10:53:40.477203    9094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:060a9df3d803721427aee4d9db182572971f8fddfdaccc18183246a007d5e636 \
	I0920 10:53:40.477214    9094 kubeadm.go:310] 	--control-plane 
	I0920 10:53:40.477219    9094 kubeadm.go:310] 
	I0920 10:53:40.477265    9094 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:53:40.477270    9094 kubeadm.go:310] 
	I0920 10:53:40.477312    9094 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oamarz.9okfcddbvqluxbug \
	I0920 10:53:40.477371    9094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:060a9df3d803721427aee4d9db182572971f8fddfdaccc18183246a007d5e636 
	I0920 10:53:40.477525    9094 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
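The kubelet warning above is kubeadm's own remediation advice. A minimal follow-up inside the guest, assuming a systemd-based guest image (which the journal output later in this report confirms), would be:

	# Enable kubelet at boot and start it now, per the warning's suggestion:
	sudo systemctl enable kubelet.service
	sudo systemctl start kubelet.service

minikube itself issues the equivalent "sudo systemctl start kubelet" a few steps further down.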
	I0920 10:53:40.477534    9094 cni.go:84] Creating CNI manager for ""
	I0920 10:53:40.477542    9094 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:53:40.481106    9094 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:53:40.489128    9094 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:53:40.492068    9094 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
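The 496-byte conflist written above is not reproduced in the log. As a sketch only, a representative bridge CNI config of the kind minikube generates might look like the following; the exact payload, and the 10.244.0.0/16 cluster pod subnet, are assumptions (the per-node PodCIDR reported later is 10.244.0.0/24):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF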
	I0920 10:53:40.496725    9094 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:53:40.496810    9094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:53:40.496812    9094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-770000 minikube.k8s.io/updated_at=2024_09_20T10_53_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=stopped-upgrade-770000 minikube.k8s.io/primary=true
	I0920 10:53:40.537852    9094 ops.go:34] apiserver oom_adj: -16
	I0920 10:53:40.537866    9094 kubeadm.go:1113] duration metric: took 41.083333ms to wait for elevateKubeSystemPrivileges

	I0920 10:53:40.537872    9094 kubeadm.go:394] duration metric: took 4m11.506875208s to StartCluster
	I0920 10:53:40.537881    9094 settings.go:142] acquiring lock: {Name:mk90c7bb0a96d07865bd05b5bab2437d4acfe4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:53:40.537974    9094 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:53:40.538416    9094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/kubeconfig: {Name:mkc202c0538e947b3e0d9844748996d0c112bf36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:53:40.538631    9094 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:53:40.538639    9094 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:53:40.538700    9094 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-770000"
	I0920 10:53:40.538707    9094 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-770000"
	W0920 10:53:40.538712    9094 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:53:40.538727    9094 host.go:66] Checking if "stopped-upgrade-770000" exists ...
	I0920 10:53:40.538727    9094 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-770000"
	I0920 10:53:40.538728    9094 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:53:40.538735    9094 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-770000"
	I0920 10:53:40.543086    9094 out.go:177] * Verifying Kubernetes components...
	I0920 10:53:40.543784    9094 kapi.go:59] client config for stopped-upgrade-770000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/client.key", CAFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1026aa030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:53:40.546451    9094 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-770000"
	W0920 10:53:40.546456    9094 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:53:40.546465    9094 host.go:66] Checking if "stopped-upgrade-770000" exists ...
	I0920 10:53:40.547043    9094 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:53:40.547050    9094 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:53:40.547055    9094 sshutil.go:53] new ssh client: &{IP:localhost Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/id_rsa Username:docker}
	I0920 10:53:40.552027    9094 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:53:36.286804    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:40.556114    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:53:40.559095    9094 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:53:40.559102    9094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:53:40.559110    9094 sshutil.go:53] new ssh client: &{IP:localhost Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/id_rsa Username:docker}
	I0920 10:53:40.632899    9094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:53:40.637950    9094 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:53:40.637998    9094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:53:40.641731    9094 api_server.go:72] duration metric: took 103.087875ms to wait for apiserver process to appear ...
	I0920 10:53:40.641738    9094 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:53:40.641745    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:40.665281    9094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:53:40.682606    9094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
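Once those two applies return, the addon objects can be checked with the same kubectl binary and kubeconfig paths shown above; a sketch, using the storage-provisioner pod name that appears later in this report:

	# Verify the storage-provisioner pod and the default StorageClass:
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pod storage-provisioner
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get storageclass

The second command is the one that would show whether the "default-storageclass" addon (which fails below with an i/o timeout) ever took effect.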
	I0920 10:53:41.288957    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:41.289107    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:41.302223    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:41.302318    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:41.313233    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:41.313325    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:41.324516    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:41.324611    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:41.335614    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:41.335689    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:41.346371    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:41.346439    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:41.358283    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:41.358367    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:41.368655    8964 logs.go:276] 0 containers: []
	W0920 10:53:41.368665    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:41.368736    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:41.378877    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:41.378892    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:41.378898    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:41.392623    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:41.392632    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:41.406230    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:41.406245    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:41.418697    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:41.418710    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:41.430765    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:41.430774    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:41.450084    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:41.450099    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:41.462496    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:41.462512    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:41.497962    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:41.497973    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:41.503138    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:41.503147    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:41.526417    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:41.526427    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:41.563902    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:41.563913    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:41.575610    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:41.575622    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:41.587165    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:41.587174    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:41.601931    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:41.601941    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:41.616935    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:41.616953    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:44.141613    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:41.035835    9094 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 10:53:41.035848    9094 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 10:53:45.643813    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:45.643868    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:49.143810    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:49.143956    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:49.154641    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:49.154738    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:49.165154    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:49.165236    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:49.175429    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:49.175518    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:49.186082    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:49.186166    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:49.196534    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:49.196614    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:49.207124    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:49.207211    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:49.217939    8964 logs.go:276] 0 containers: []
	W0920 10:53:49.217953    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:49.218027    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:49.228477    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:49.228496    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:49.228502    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:49.247592    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:49.247603    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:49.259464    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:49.259476    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:49.271301    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:49.271312    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:49.286432    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:49.286443    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:49.291156    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:49.291163    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:49.302828    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:49.302840    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:49.314452    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:49.314463    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:49.326145    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:49.326156    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:49.361082    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:49.361090    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:49.396546    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:49.396559    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:49.414480    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:49.414492    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:49.426297    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:49.426308    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:49.438914    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:49.438927    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:49.456459    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:49.456470    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:50.644237    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:50.644279    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:51.982410    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:55.644659    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:55.644683    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:56.984655    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:56.984835    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:56.997157    8964 logs.go:276] 1 containers: [5d0ba1e05e07]
	I0920 10:53:56.997251    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:57.007788    8964 logs.go:276] 1 containers: [d9a87309b8aa]
	I0920 10:53:57.007873    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:57.017999    8964 logs.go:276] 4 containers: [a4e46b607ce4 3abc381c32e7 2200b92078db 39603ebf59b8]
	I0920 10:53:57.018082    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:57.028330    8964 logs.go:276] 1 containers: [583fd1cc014d]
	I0920 10:53:57.028404    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:57.038862    8964 logs.go:276] 1 containers: [2f00c7382aad]
	I0920 10:53:57.038942    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:57.049931    8964 logs.go:276] 1 containers: [822a6a6c839b]
	I0920 10:53:57.050009    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:57.060039    8964 logs.go:276] 0 containers: []
	W0920 10:53:57.060053    8964 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:57.060125    8964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:57.071467    8964 logs.go:276] 1 containers: [a91959c95631]
	I0920 10:53:57.071485    8964 logs.go:123] Gathering logs for coredns [2200b92078db] ...
	I0920 10:53:57.071491    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2200b92078db"
	I0920 10:53:57.083162    8964 logs.go:123] Gathering logs for kube-scheduler [583fd1cc014d] ...
	I0920 10:53:57.083173    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583fd1cc014d"
	I0920 10:53:57.097705    8964 logs.go:123] Gathering logs for kube-apiserver [5d0ba1e05e07] ...
	I0920 10:53:57.097721    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d0ba1e05e07"
	I0920 10:53:57.112299    8964 logs.go:123] Gathering logs for coredns [39603ebf59b8] ...
	I0920 10:53:57.112312    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39603ebf59b8"
	I0920 10:53:57.123547    8964 logs.go:123] Gathering logs for kube-proxy [2f00c7382aad] ...
	I0920 10:53:57.123560    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f00c7382aad"
	I0920 10:53:57.135111    8964 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:57.135122    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:57.169313    8964 logs.go:123] Gathering logs for etcd [d9a87309b8aa] ...
	I0920 10:53:57.169324    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a87309b8aa"
	I0920 10:53:57.183381    8964 logs.go:123] Gathering logs for kube-controller-manager [822a6a6c839b] ...
	I0920 10:53:57.183397    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 822a6a6c839b"
	I0920 10:53:57.200972    8964 logs.go:123] Gathering logs for container status ...
	I0920 10:53:57.200987    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:57.213389    8964 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:57.213401    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:57.247013    8964 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:57.247023    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:57.251318    8964 logs.go:123] Gathering logs for coredns [a4e46b607ce4] ...
	I0920 10:53:57.251326    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4e46b607ce4"
	I0920 10:53:57.263148    8964 logs.go:123] Gathering logs for coredns [3abc381c32e7] ...
	I0920 10:53:57.263161    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3abc381c32e7"
	I0920 10:53:57.275058    8964 logs.go:123] Gathering logs for storage-provisioner [a91959c95631] ...
	I0920 10:53:57.275070    8964 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a91959c95631"
	I0920 10:53:57.286183    8964 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:57.286194    8964 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:59.811486    8964 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:00.645145    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:00.645207    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:04.813090    8964 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:04.818520    8964 out.go:201] 
	W0920 10:54:04.821335    8964 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0920 10:54:04.821341    8964 out.go:270] * 
	W0920 10:54:04.821900    8964 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:54:04.836396    8964 out.go:201] 
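The box above gives the standard escalation path; the same collection can also be scoped to a single profile, e.g.:

	# Write the full minikube log bundle to a file, optionally per profile (-p):
	minikube logs --file=logs.txt
	minikube -p running-upgrade-097000 logs --file=logs.txt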
	I0920 10:54:05.645860    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:05.645905    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:10.646722    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:10.646792    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0920 10:54:11.037999    9094 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0920 10:54:11.042327    9094 out.go:177] * Enabled addons: storage-provisioner
	I0920 10:54:11.054264    9094 addons.go:510] duration metric: took 30.515940583s for enable addons: enabled=[storage-provisioner]
	I0920 10:54:15.647905    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:15.647947    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-09-20 17:45:12 UTC, ends at Fri 2024-09-20 17:54:20 UTC. --
	Sep 20 17:54:06 running-upgrade-097000 dockerd[3208]: time="2024-09-20T17:54:06.717198476Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f34fb0f256ccadeb428ae23c4444d89d73d8c3649c1921fb80306dcf9f8588da pid=18845 runtime=io.containerd.runc.v2
	Sep 20 17:54:06 running-upgrade-097000 dockerd[3208]: time="2024-09-20T17:54:06.757207039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 20 17:54:06 running-upgrade-097000 dockerd[3208]: time="2024-09-20T17:54:06.757528014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 20 17:54:06 running-upgrade-097000 dockerd[3208]: time="2024-09-20T17:54:06.757542179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 20 17:54:06 running-upgrade-097000 dockerd[3208]: time="2024-09-20T17:54:06.757610715Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1e92013d6033fc4087178dc96de5a29371c8edadf8f9ba1216678fc115cc4bbf pid=18895 runtime=io.containerd.runc.v2
	Sep 20 17:54:06 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:06Z" level=error msg="ContainerStats resp: {0x4000361900 linux}"
	Sep 20 17:54:07 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:07Z" level=error msg="ContainerStats resp: {0x40008d4e00 linux}"
	Sep 20 17:54:07 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:07Z" level=error msg="ContainerStats resp: {0x40007e8ac0 linux}"
	Sep 20 17:54:07 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:07Z" level=error msg="ContainerStats resp: {0x40007e8e80 linux}"
	Sep 20 17:54:07 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:07Z" level=error msg="ContainerStats resp: {0x40007e8fc0 linux}"
	Sep 20 17:54:07 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:07Z" level=error msg="ContainerStats resp: {0x40008d5d80 linux}"
	Sep 20 17:54:07 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:07Z" level=error msg="ContainerStats resp: {0x4000978340 linux}"
	Sep 20 17:54:07 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:07Z" level=error msg="ContainerStats resp: {0x40009788c0 linux}"
	Sep 20 17:54:11 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:11Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 20 17:54:16 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:16Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 20 17:54:17 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:17Z" level=error msg="ContainerStats resp: {0x4000978340 linux}"
	Sep 20 17:54:17 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:17Z" level=error msg="ContainerStats resp: {0x40009791c0 linux}"
	Sep 20 17:54:18 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:18Z" level=error msg="ContainerStats resp: {0x40009d9b80 linux}"
	Sep 20 17:54:19 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:19Z" level=error msg="ContainerStats resp: {0x4000352540 linux}"
	Sep 20 17:54:19 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:19Z" level=error msg="ContainerStats resp: {0x4000426880 linux}"
	Sep 20 17:54:19 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:19Z" level=error msg="ContainerStats resp: {0x4000352fc0 linux}"
	Sep 20 17:54:19 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:19Z" level=error msg="ContainerStats resp: {0x4000353640 linux}"
	Sep 20 17:54:19 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:19Z" level=error msg="ContainerStats resp: {0x4000427a80 linux}"
	Sep 20 17:54:19 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:19Z" level=error msg="ContainerStats resp: {0x40000bef00 linux}"
	Sep 20 17:54:19 running-upgrade-097000 cri-dockerd[3041]: time="2024-09-20T17:54:19Z" level=error msg="ContainerStats resp: {0x40000bfc80 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1e92013d6033f       edaa71f2aee88       14 seconds ago      Running             coredns                   2                   a8c78afcc167b
	f34fb0f256cca       edaa71f2aee88       14 seconds ago      Running             coredns                   2                   ff7369d6ccbdb
	a4e46b607ce47       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   a8c78afcc167b
	3abc381c32e7f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   ff7369d6ccbdb
	2f00c7382aadb       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   11c34133184dd
	a91959c956315       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   a788461e0b879
	d9a87309b8aad       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   132c5a31cf0be
	583fd1cc014df       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   5dec23df8e237
	5d0ba1e05e070       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   06c55bb7ce9fb
	822a6a6c839b5       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   641f45c899a65
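This table is what the "container status" gathering step above produces. It can be regenerated inside the guest with the exact fallback command the harness runs, or from the host over the profile's SSH session:

	# Inside the guest (fallback exactly as in the log):
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	# From the host:
	minikube -p running-upgrade-097000 ssh -- sudo crictl ps -a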
	
	
	==> coredns [1e92013d6033] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6931952057474642005.7284863800932191799. HINFO: read udp 10.244.0.3:41445->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6931952057474642005.7284863800932191799. HINFO: read udp 10.244.0.3:52875->10.0.2.3:53: i/o timeout
	
	
	==> coredns [3abc381c32e7] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8892918693001528007.5256360824346649491. HINFO: read udp 10.244.0.2:54688->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8892918693001528007.5256360824346649491. HINFO: read udp 10.244.0.2:36600->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8892918693001528007.5256360824346649491. HINFO: read udp 10.244.0.2:38062->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8892918693001528007.5256360824346649491. HINFO: read udp 10.244.0.2:40517->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8892918693001528007.5256360824346649491. HINFO: read udp 10.244.0.2:48346->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a4e46b607ce4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3202380578074206794.7680143740568723189. HINFO: read udp 10.244.0.3:34504->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3202380578074206794.7680143740568723189. HINFO: read udp 10.244.0.3:43052->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3202380578074206794.7680143740568723189. HINFO: read udp 10.244.0.3:51703->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3202380578074206794.7680143740568723189. HINFO: read udp 10.244.0.3:46737->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3202380578074206794.7680143740568723189. HINFO: read udp 10.244.0.3:59113->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3202380578074206794.7680143740568723189. HINFO: read udp 10.244.0.3:53285->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3202380578074206794.7680143740568723189. HINFO: read udp 10.244.0.3:52879->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3202380578074206794.7680143740568723189. HINFO: read udp 10.244.0.3:52043->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3202380578074206794.7680143740568723189. HINFO: read udp 10.244.0.3:45824->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3202380578074206794.7680143740568723189. HINFO: read udp 10.244.0.3:51159->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f34fb0f256cc] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7596050255510227712.4027319446671505874. HINFO: read udp 10.244.0.2:47296->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7596050255510227712.4027319446671505874. HINFO: read udp 10.244.0.2:40092->10.0.2.3:53: i/o timeout
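All four coredns containers fail the same way: their startup HINFO probes to 10.0.2.3, QEMU's built-in slirp DNS forwarder, time out, so upstream resolution never works. A quick check of both resolvers from inside the guest, assuming nslookup is present in the Buildroot image:

	# Upstream forwarder coredns is sending to:
	nslookup kubernetes.io 10.0.2.3
	# Cluster DNS service (clusterIP 10.96.0.10, from the apiserver log below):
	nslookup kubernetes.default.svc.cluster.local 10.96.0.10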
	
	
	==> describe nodes <==
	Name:               running-upgrade-097000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-097000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=running-upgrade-097000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T10_50_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:50:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-097000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:54:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:50:04 +0000   Fri, 20 Sep 2024 17:50:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:50:04 +0000   Fri, 20 Sep 2024 17:50:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:50:04 +0000   Fri, 20 Sep 2024 17:50:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:50:04 +0000   Fri, 20 Sep 2024 17:50:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-097000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 637070069d084e52b1f0dd77dc423643
	  System UUID:                637070069d084e52b1f0dd77dc423643
	  Boot ID:                    f05e0bf7-f4ad-484b-81df-c35378ae6de4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9w228                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-zps9p                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-097000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-097000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-097000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-jlzp6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-097000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m1s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-097000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-097000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-097000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-097000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-097000 event: Registered Node running-upgrade-097000 in Controller
	
	
	==> dmesg <==
	[  +2.082632] systemd-fstab-generator[873]: Ignoring "noauto" for root device
	[  +0.063542] systemd-fstab-generator[884]: Ignoring "noauto" for root device
	[  +0.063718] systemd-fstab-generator[895]: Ignoring "noauto" for root device
	[  +1.141617] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.068545] systemd-fstab-generator[1046]: Ignoring "noauto" for root device
	[  +0.064349] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +2.862301] systemd-fstab-generator[1287]: Ignoring "noauto" for root device
	[  +9.150498] systemd-fstab-generator[1925]: Ignoring "noauto" for root device
	[  +2.786373] systemd-fstab-generator[2209]: Ignoring "noauto" for root device
	[  +0.151492] systemd-fstab-generator[2242]: Ignoring "noauto" for root device
	[  +0.098829] systemd-fstab-generator[2253]: Ignoring "noauto" for root device
	[  +0.095580] systemd-fstab-generator[2266]: Ignoring "noauto" for root device
	[  +3.390055] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.207041] systemd-fstab-generator[2997]: Ignoring "noauto" for root device
	[  +0.088699] systemd-fstab-generator[3009]: Ignoring "noauto" for root device
	[  +0.055049] systemd-fstab-generator[3020]: Ignoring "noauto" for root device
	[  +0.071394] systemd-fstab-generator[3034]: Ignoring "noauto" for root device
	[  +2.277712] systemd-fstab-generator[3188]: Ignoring "noauto" for root device
	[  +3.399084] systemd-fstab-generator[3611]: Ignoring "noauto" for root device
	[  +1.331581] systemd-fstab-generator[3903]: Ignoring "noauto" for root device
	[Sep20 17:46] kauditd_printk_skb: 68 callbacks suppressed
	[ +39.540114] kauditd_printk_skb: 21 callbacks suppressed
	[Sep20 17:49] systemd-fstab-generator[11920]: Ignoring "noauto" for root device
	[Sep20 17:50] systemd-fstab-generator[12514]: Ignoring "noauto" for root device
	[  +0.465945] systemd-fstab-generator[12649]: Ignoring "noauto" for root device
	
	
	==> etcd [d9a87309b8aa] <==
	{"level":"info","ts":"2024-09-20T17:49:59.617Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T17:49:59.617Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T17:49:59.617Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-20T17:49:59.619Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-20T17:49:59.619Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-20T17:49:59.618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-20T17:49:59.619Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-20T17:50:00.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T17:50:00.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T17:50:00.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-20T17:50:00.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T17:50:00.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-20T17:50:00.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-20T17:50:00.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-20T17:50:00.292Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-097000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T17:50:00.293Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:50:00.293Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:50:00.293Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T17:50:00.293Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:50:00.294Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:50:00.294Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:50:00.294Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:50:00.294Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T17:50:00.294Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T17:50:00.294Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 17:54:21 up 9 min,  0 users,  load average: 0.45, 0.36, 0.20
	Linux running-upgrade-097000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [5d0ba1e05e07] <==
	I0920 17:50:01.471343       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0920 17:50:01.481726       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0920 17:50:01.481737       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 17:50:01.482787       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0920 17:50:01.490895       1 cache.go:39] Caches are synced for autoregister controller
	I0920 17:50:01.493040       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0920 17:50:01.501667       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0920 17:50:02.217549       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0920 17:50:02.394695       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0920 17:50:02.396151       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0920 17:50:02.396161       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 17:50:02.513949       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 17:50:02.525099       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 17:50:02.546937       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0920 17:50:02.548850       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0920 17:50:02.549268       1 controller.go:611] quota admission added evaluator for: endpoints
	I0920 17:50:02.550592       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 17:50:03.511713       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0920 17:50:03.992262       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0920 17:50:03.997110       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0920 17:50:04.009475       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0920 17:50:04.036306       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 17:50:18.170393       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0920 17:50:18.369491       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0920 17:50:19.158987       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [822a6a6c839b] <==
	I0920 17:50:17.541465       1 shared_informer.go:262] Caches are synced for cronjob
	I0920 17:50:17.545775       1 shared_informer.go:262] Caches are synced for persistent volume
	I0920 17:50:17.568155       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0920 17:50:17.568204       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0920 17:50:17.568173       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0920 17:50:17.568179       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0920 17:50:17.568185       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0920 17:50:17.568321       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0920 17:50:17.568189       1 shared_informer.go:262] Caches are synced for HPA
	I0920 17:50:17.569322       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0920 17:50:17.569361       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0920 17:50:17.569362       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0920 17:50:17.671221       1 shared_informer.go:262] Caches are synced for attach detach
	I0920 17:50:17.717373       1 shared_informer.go:262] Caches are synced for disruption
	I0920 17:50:17.717438       1 disruption.go:371] Sending events to api server.
	I0920 17:50:17.725582       1 shared_informer.go:262] Caches are synced for resource quota
	I0920 17:50:17.747539       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0920 17:50:17.773789       1 shared_informer.go:262] Caches are synced for resource quota
	I0920 17:50:18.172008       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0920 17:50:18.186824       1 shared_informer.go:262] Caches are synced for garbage collector
	I0920 17:50:18.269628       1 shared_informer.go:262] Caches are synced for garbage collector
	I0920 17:50:18.269642       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0920 17:50:18.371970       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jlzp6"
	I0920 17:50:18.571402       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zps9p"
	I0920 17:50:18.574925       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-9w228"
	
	
	==> kube-proxy [2f00c7382aad] <==
	I0920 17:50:19.148500       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0920 17:50:19.148524       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0920 17:50:19.148533       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0920 17:50:19.157154       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0920 17:50:19.157165       1 server_others.go:206] "Using iptables Proxier"
	I0920 17:50:19.157179       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0920 17:50:19.157290       1 server.go:661] "Version info" version="v1.24.1"
	I0920 17:50:19.157294       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:50:19.157764       1 config.go:317] "Starting service config controller"
	I0920 17:50:19.157767       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0920 17:50:19.157774       1 config.go:226] "Starting endpoint slice config controller"
	I0920 17:50:19.157775       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0920 17:50:19.157956       1 config.go:444] "Starting node config controller"
	I0920 17:50:19.157958       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0920 17:50:19.258207       1 shared_informer.go:262] Caches are synced for service config
	I0920 17:50:19.258234       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0920 17:50:19.258343       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [583fd1cc014d] <==
	W0920 17:50:01.438940       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 17:50:01.438962       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0920 17:50:01.439055       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:50:01.439078       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0920 17:50:01.439125       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:50:01.439157       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0920 17:50:01.439189       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:50:01.439313       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0920 17:50:01.439201       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:50:01.439382       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0920 17:50:01.439394       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 17:50:01.439467       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0920 17:50:01.439409       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:50:01.439497       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0920 17:50:01.439421       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:50:01.439549       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0920 17:50:01.439573       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 17:50:01.439593       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0920 17:50:02.306725       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:50:02.306742       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0920 17:50:02.389491       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:50:02.389571       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0920 17:50:02.405116       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:50:02.405132       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0920 17:50:02.930782       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-09-20 17:45:12 UTC, ends at Fri 2024-09-20 17:54:21 UTC. --
	Sep 20 17:50:05 running-upgrade-097000 kubelet[12520]: E0920 17:50:05.835176   12520 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-097000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-097000"
	Sep 20 17:50:06 running-upgrade-097000 kubelet[12520]: E0920 17:50:06.029518   12520 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-097000\" already exists" pod="kube-system/etcd-running-upgrade-097000"
	Sep 20 17:50:06 running-upgrade-097000 kubelet[12520]: I0920 17:50:06.221973   12520 request.go:601] Waited for 1.123219538s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 20 17:50:06 running-upgrade-097000 kubelet[12520]: E0920 17:50:06.225643   12520 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-097000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-097000"
	Sep 20 17:50:17 running-upgrade-097000 kubelet[12520]: I0920 17:50:17.531790   12520 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 17:50:17 running-upgrade-097000 kubelet[12520]: I0920 17:50:17.594469   12520 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 20 17:50:17 running-upgrade-097000 kubelet[12520]: I0920 17:50:17.594691   12520 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvsbf\" (UniqueName: \"kubernetes.io/projected/3933122d-3777-490e-b079-7a8c5d58c67e-kube-api-access-lvsbf\") pod \"storage-provisioner\" (UID: \"3933122d-3777-490e-b079-7a8c5d58c67e\") " pod="kube-system/storage-provisioner"
	Sep 20 17:50:17 running-upgrade-097000 kubelet[12520]: I0920 17:50:17.594707   12520 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3933122d-3777-490e-b079-7a8c5d58c67e-tmp\") pod \"storage-provisioner\" (UID: \"3933122d-3777-490e-b079-7a8c5d58c67e\") " pod="kube-system/storage-provisioner"
	Sep 20 17:50:17 running-upgrade-097000 kubelet[12520]: I0920 17:50:17.594943   12520 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 20 17:50:17 running-upgrade-097000 kubelet[12520]: E0920 17:50:17.698435   12520 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 20 17:50:17 running-upgrade-097000 kubelet[12520]: E0920 17:50:17.698455   12520 projected.go:192] Error preparing data for projected volume kube-api-access-lvsbf for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 20 17:50:17 running-upgrade-097000 kubelet[12520]: E0920 17:50:17.698600   12520 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/3933122d-3777-490e-b079-7a8c5d58c67e-kube-api-access-lvsbf podName:3933122d-3777-490e-b079-7a8c5d58c67e nodeName:}" failed. No retries permitted until 2024-09-20 17:50:18.198481151 +0000 UTC m=+14.215837938 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lvsbf" (UniqueName: "kubernetes.io/projected/3933122d-3777-490e-b079-7a8c5d58c67e-kube-api-access-lvsbf") pod "storage-provisioner" (UID: "3933122d-3777-490e-b079-7a8c5d58c67e") : configmap "kube-root-ca.crt" not found
	Sep 20 17:50:18 running-upgrade-097000 kubelet[12520]: I0920 17:50:18.374672   12520 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 17:50:18 running-upgrade-097000 kubelet[12520]: I0920 17:50:18.500062   12520 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vgnf\" (UniqueName: \"kubernetes.io/projected/1664b4b4-f834-4375-aa3f-de60551a69e9-kube-api-access-9vgnf\") pod \"kube-proxy-jlzp6\" (UID: \"1664b4b4-f834-4375-aa3f-de60551a69e9\") " pod="kube-system/kube-proxy-jlzp6"
	Sep 20 17:50:18 running-upgrade-097000 kubelet[12520]: I0920 17:50:18.500082   12520 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1664b4b4-f834-4375-aa3f-de60551a69e9-xtables-lock\") pod \"kube-proxy-jlzp6\" (UID: \"1664b4b4-f834-4375-aa3f-de60551a69e9\") " pod="kube-system/kube-proxy-jlzp6"
	Sep 20 17:50:18 running-upgrade-097000 kubelet[12520]: I0920 17:50:18.500092   12520 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1664b4b4-f834-4375-aa3f-de60551a69e9-lib-modules\") pod \"kube-proxy-jlzp6\" (UID: \"1664b4b4-f834-4375-aa3f-de60551a69e9\") " pod="kube-system/kube-proxy-jlzp6"
	Sep 20 17:50:18 running-upgrade-097000 kubelet[12520]: I0920 17:50:18.500106   12520 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1664b4b4-f834-4375-aa3f-de60551a69e9-kube-proxy\") pod \"kube-proxy-jlzp6\" (UID: \"1664b4b4-f834-4375-aa3f-de60551a69e9\") " pod="kube-system/kube-proxy-jlzp6"
	Sep 20 17:50:18 running-upgrade-097000 kubelet[12520]: I0920 17:50:18.577159   12520 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 17:50:18 running-upgrade-097000 kubelet[12520]: I0920 17:50:18.580963   12520 topology_manager.go:200] "Topology Admit Handler"
	Sep 20 17:50:18 running-upgrade-097000 kubelet[12520]: I0920 17:50:18.701257   12520 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wds6j\" (UniqueName: \"kubernetes.io/projected/c9a579cd-e843-473b-9aa7-2db5200c28ba-kube-api-access-wds6j\") pod \"coredns-6d4b75cb6d-9w228\" (UID: \"c9a579cd-e843-473b-9aa7-2db5200c28ba\") " pod="kube-system/coredns-6d4b75cb6d-9w228"
	Sep 20 17:50:18 running-upgrade-097000 kubelet[12520]: I0920 17:50:18.701286   12520 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cwzb\" (UniqueName: \"kubernetes.io/projected/06766c0b-dfbd-4965-96f5-6c250adc1acb-kube-api-access-8cwzb\") pod \"coredns-6d4b75cb6d-zps9p\" (UID: \"06766c0b-dfbd-4965-96f5-6c250adc1acb\") " pod="kube-system/coredns-6d4b75cb6d-zps9p"
	Sep 20 17:50:18 running-upgrade-097000 kubelet[12520]: I0920 17:50:18.701297   12520 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06766c0b-dfbd-4965-96f5-6c250adc1acb-config-volume\") pod \"coredns-6d4b75cb6d-zps9p\" (UID: \"06766c0b-dfbd-4965-96f5-6c250adc1acb\") " pod="kube-system/coredns-6d4b75cb6d-zps9p"
	Sep 20 17:50:18 running-upgrade-097000 kubelet[12520]: I0920 17:50:18.701309   12520 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9a579cd-e843-473b-9aa7-2db5200c28ba-config-volume\") pod \"coredns-6d4b75cb6d-9w228\" (UID: \"c9a579cd-e843-473b-9aa7-2db5200c28ba\") " pod="kube-system/coredns-6d4b75cb6d-9w228"
	Sep 20 17:54:06 running-upgrade-097000 kubelet[12520]: I0920 17:54:06.760820   12520 scope.go:110] "RemoveContainer" containerID="39603ebf59b8254e05534d88053c504b995e7862e882051c8182829b2d50c41c"
	Sep 20 17:54:06 running-upgrade-097000 kubelet[12520]: I0920 17:54:06.803331   12520 scope.go:110] "RemoveContainer" containerID="2200b92078dbd3ba71f75db3d502514410fd109e4bc215d6fe1e20715a7fb8ba"
	
	
	==> storage-provisioner [a91959c95631] <==
	I0920 17:50:18.616519       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:50:18.620475       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:50:18.620491       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:50:18.623421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:50:18.623465       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1b6ecfc3-e8d8-43ba-9131-27049cb0b9ce", APIVersion:"v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-097000_8cff0571-93a0-4548-86f9-cf7dacd91893 became leader
	I0920 17:50:18.623558       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-097000_8cff0571-93a0-4548-86f9-cf7dacd91893!
	I0920 17:50:18.725644       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-097000_8cff0571-93a0-4548-86f9-cf7dacd91893!
	

-- /stdout --
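
The kube-scheduler "forbidden" warnings at 17:50:01-17:50:02 in the log above are the usual startup race: the scheduler begins its list/watch informers before the RBAC bootstrap policy has been reconciled, and the closing "Caches are synced" line at 17:50:02.930782 shows the loops did recover. On a live cluster the convergence can be confirmed with impersonation (a hedged follow-up probe, not something this test run executed):

	kubectl auth can-i list persistentvolumes --as=system:kube-scheduler
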
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-097000 -n running-upgrade-097000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-097000 -n running-upgrade-097000: exit status 2 (15.692316542s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-097000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-097000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-097000
--- FAIL: TestRunningBinaryUpgrade (589.02s)
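
The status probe at helpers_test.go:254 reads only the APIServer field; when triaging a "Stopped" result by hand it can help to dump the neighbouring fields through the same Go-template flag (a minimal sketch reusing the test's binary and profile name; the Kubelet and Kubeconfig field names are assumed to sit in the same status struct as the Host and APIServer fields the tests already query):

	out/minikube-darwin-arm64 status -p running-upgrade-097000 \
	    --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'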

TestKubernetesUpgrade (18.83s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-279000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-279000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.939258291s)

-- stdout --
	* [kubernetes-upgrade-279000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-279000" primary control-plane node in "kubernetes-upgrade-279000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-279000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:47:48.172515    9025 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:47:48.172637    9025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:48.172641    9025 out.go:358] Setting ErrFile to fd 2...
	I0920 10:47:48.172650    9025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:47:48.172789    9025 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:47:48.173857    9025 out.go:352] Setting JSON to false
	I0920 10:47:48.190305    9025 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6439,"bootTime":1726848029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:47:48.190373    9025 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:47:48.196570    9025 out.go:177] * [kubernetes-upgrade-279000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:47:48.204305    9025 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:47:48.204336    9025 notify.go:220] Checking for updates...
	I0920 10:47:48.208213    9025 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:47:48.211275    9025 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:47:48.214261    9025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:47:48.217321    9025 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:47:48.220296    9025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:47:48.223604    9025 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:47:48.223668    9025 config.go:182] Loaded profile config "running-upgrade-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:47:48.223716    9025 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:47:48.228222    9025 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:47:48.235360    9025 start.go:297] selected driver: qemu2
	I0920 10:47:48.235365    9025 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:47:48.235374    9025 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:47:48.237422    9025 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:47:48.240249    9025 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:47:48.243331    9025 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:47:48.243352    9025 cni.go:84] Creating CNI manager for ""
	I0920 10:47:48.243381    9025 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 10:47:48.243414    9025 start.go:340] cluster config:
	{Name:kubernetes-upgrade-279000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:47:48.246679    9025 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:47:48.254233    9025 out.go:177] * Starting "kubernetes-upgrade-279000" primary control-plane node in "kubernetes-upgrade-279000" cluster
	I0920 10:47:48.258289    9025 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:47:48.258302    9025 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:47:48.258313    9025 cache.go:56] Caching tarball of preloaded images
	I0920 10:47:48.258374    9025 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:47:48.258379    9025 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 10:47:48.258444    9025 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/kubernetes-upgrade-279000/config.json ...
	I0920 10:47:48.258458    9025 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/kubernetes-upgrade-279000/config.json: {Name:mkd787c6e7dbeb114ee78c8c07bb781ac4be8f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:47:48.258774    9025 start.go:360] acquireMachinesLock for kubernetes-upgrade-279000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:47:48.258803    9025 start.go:364] duration metric: took 23.708µs to acquireMachinesLock for "kubernetes-upgrade-279000"
	I0920 10:47:48.258814    9025 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-279000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:47:48.258838    9025 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:47:48.262284    9025 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:47:48.278106    9025 start.go:159] libmachine.API.Create for "kubernetes-upgrade-279000" (driver="qemu2")
	I0920 10:47:48.278145    9025 client.go:168] LocalClient.Create starting
	I0920 10:47:48.278205    9025 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:47:48.278236    9025 main.go:141] libmachine: Decoding PEM data...
	I0920 10:47:48.278244    9025 main.go:141] libmachine: Parsing certificate...
	I0920 10:47:48.278286    9025 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:47:48.278309    9025 main.go:141] libmachine: Decoding PEM data...
	I0920 10:47:48.278321    9025 main.go:141] libmachine: Parsing certificate...
	I0920 10:47:48.278683    9025 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:47:48.446110    9025 main.go:141] libmachine: Creating SSH key...
	I0920 10:47:48.631414    9025 main.go:141] libmachine: Creating Disk image...
	I0920 10:47:48.631425    9025 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:47:48.631667    9025 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2
	I0920 10:47:48.641352    9025 main.go:141] libmachine: STDOUT: 
	I0920 10:47:48.641379    9025 main.go:141] libmachine: STDERR: 
	I0920 10:47:48.641444    9025 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2 +20000M
	I0920 10:47:48.649493    9025 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:47:48.649509    9025 main.go:141] libmachine: STDERR: 
	I0920 10:47:48.649528    9025 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2
	I0920 10:47:48.649537    9025 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:47:48.649551    9025 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:47:48.649581    9025 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:af:98:4e:50:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2
	I0920 10:47:48.651151    9025 main.go:141] libmachine: STDOUT: 
	I0920 10:47:48.651165    9025 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:47:48.651186    9025 client.go:171] duration metric: took 373.032292ms to LocalClient.Create
	I0920 10:47:50.653486    9025 start.go:128] duration metric: took 2.394624084s to createHost
	I0920 10:47:50.653579    9025 start.go:83] releasing machines lock for "kubernetes-upgrade-279000", held for 2.394775458s
	W0920 10:47:50.653655    9025 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:47:50.666009    9025 out.go:177] * Deleting "kubernetes-upgrade-279000" in qemu2 ...
	W0920 10:47:50.700194    9025 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:47:50.700222    9025 start.go:729] Will try again in 5 seconds ...
	I0920 10:47:55.702434    9025 start.go:360] acquireMachinesLock for kubernetes-upgrade-279000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:47:55.702842    9025 start.go:364] duration metric: took 331.875µs to acquireMachinesLock for "kubernetes-upgrade-279000"
	I0920 10:47:55.702913    9025 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-279000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:47:55.703068    9025 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:47:55.715026    9025 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:47:55.751318    9025 start.go:159] libmachine.API.Create for "kubernetes-upgrade-279000" (driver="qemu2")
	I0920 10:47:55.751367    9025 client.go:168] LocalClient.Create starting
	I0920 10:47:55.751472    9025 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:47:55.751535    9025 main.go:141] libmachine: Decoding PEM data...
	I0920 10:47:55.751549    9025 main.go:141] libmachine: Parsing certificate...
	I0920 10:47:55.751604    9025 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:47:55.751639    9025 main.go:141] libmachine: Decoding PEM data...
	I0920 10:47:55.751650    9025 main.go:141] libmachine: Parsing certificate...
	I0920 10:47:55.752095    9025 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:47:55.921447    9025 main.go:141] libmachine: Creating SSH key...
	I0920 10:47:56.025295    9025 main.go:141] libmachine: Creating Disk image...
	I0920 10:47:56.025303    9025 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:47:56.025513    9025 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2
	I0920 10:47:56.035065    9025 main.go:141] libmachine: STDOUT: 
	I0920 10:47:56.035082    9025 main.go:141] libmachine: STDERR: 
	I0920 10:47:56.035136    9025 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2 +20000M
	I0920 10:47:56.043098    9025 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:47:56.043113    9025 main.go:141] libmachine: STDERR: 
	I0920 10:47:56.043127    9025 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2
	I0920 10:47:56.043139    9025 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:47:56.043147    9025 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:47:56.043180    9025 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:92:dc:d1:78:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2
	I0920 10:47:56.044834    9025 main.go:141] libmachine: STDOUT: 
	I0920 10:47:56.044848    9025 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:47:56.044858    9025 client.go:171] duration metric: took 293.486ms to LocalClient.Create
	I0920 10:47:58.046977    9025 start.go:128] duration metric: took 2.343903541s to createHost
	I0920 10:47:58.047028    9025 start.go:83] releasing machines lock for "kubernetes-upgrade-279000", held for 2.344171083s
	W0920 10:47:58.047120    9025 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-279000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-279000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:47:58.057329    9025 out.go:201] 
	W0920 10:47:58.060347    9025 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:47:58.060351    9025 out.go:270] * 
	* 
	W0920 10:47:58.060795    9025 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:47:58.075302    9025 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-279000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
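
Both create attempts above fail at the same point, before the VM ever boots: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the connected file descriptor it would hand to qemu-system-aarch64 as "-netdev socket,id=net0,fd=3" never exists. A first triage step on the CI host, using only the SocketVMnetPath already shown in the cluster config (how the daemon is supervised, via launchd or otherwise, is an assumption):

	ls -l /var/run/socket_vmnet    # does the unix socket exist at the configured path?
	pgrep -fl socket_vmnet         # is any socket_vmnet daemon process running?
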
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-279000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-279000: (3.501805375s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-279000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-279000 status --format={{.Host}}: exit status 7 (65.10825ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
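
version_upgrade_test.go tolerates exit status 7 here because minikube status encodes component state in the exit code's low bits; assuming the conventional flag values (1 = host down, 2 = kubelet down, 4 = apiserver down), 7 decodes to the all-stopped state that matches the "Stopped" stdout above. A sketch of the decode (the bit assignments are an assumption about minikube's documented status behaviour, not something printed in this report):

	rc=7
	(( rc & 1 )) && echo "host: down"        # bit 0: host not running
	(( rc & 2 )) && echo "kubelet: down"     # bit 1: kubelet not running
	(( rc & 4 )) && echo "apiserver: down"   # bit 2: apiserver not running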
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-279000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-279000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.176205834s)

-- stdout --
	* [kubernetes-upgrade-279000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-279000" primary control-plane node in "kubernetes-upgrade-279000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-279000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-279000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:48:01.682799    9061 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:48:01.682921    9061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:01.682925    9061 out.go:358] Setting ErrFile to fd 2...
	I0920 10:48:01.682927    9061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:48:01.683094    9061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:48:01.684150    9061 out.go:352] Setting JSON to false
	I0920 10:48:01.700729    9061 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6452,"bootTime":1726848029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:48:01.700818    9061 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:48:01.705383    9061 out.go:177] * [kubernetes-upgrade-279000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:48:01.713311    9061 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:48:01.713339    9061 notify.go:220] Checking for updates...
	I0920 10:48:01.721297    9061 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:48:01.725156    9061 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:48:01.729296    9061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:48:01.732314    9061 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:48:01.733804    9061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:48:01.737577    9061 config.go:182] Loaded profile config "kubernetes-upgrade-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0920 10:48:01.737833    9061 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:48:01.741294    9061 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:48:01.746305    9061 start.go:297] selected driver: qemu2
	I0920 10:48:01.746313    9061 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-279000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:48:01.746385    9061 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:48:01.748509    9061 cni.go:84] Creating CNI manager for ""
	I0920 10:48:01.748541    9061 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:48:01.748569    9061 start.go:340] cluster config:
	{Name:kubernetes-upgrade-279000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:48:01.751760    9061 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:48:01.760240    9061 out.go:177] * Starting "kubernetes-upgrade-279000" primary control-plane node in "kubernetes-upgrade-279000" cluster
	I0920 10:48:01.764281    9061 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:48:01.764303    9061 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:48:01.764318    9061 cache.go:56] Caching tarball of preloaded images
	I0920 10:48:01.764383    9061 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:48:01.764390    9061 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:48:01.764445    9061 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/kubernetes-upgrade-279000/config.json ...
	I0920 10:48:01.764804    9061 start.go:360] acquireMachinesLock for kubernetes-upgrade-279000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:48:01.764831    9061 start.go:364] duration metric: took 20.292µs to acquireMachinesLock for "kubernetes-upgrade-279000"
	I0920 10:48:01.764840    9061 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:48:01.764843    9061 fix.go:54] fixHost starting: 
	I0920 10:48:01.764951    9061 fix.go:112] recreateIfNeeded on kubernetes-upgrade-279000: state=Stopped err=<nil>
	W0920 10:48:01.764959    9061 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:48:01.773288    9061 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-279000" ...
	I0920 10:48:01.777303    9061 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:48:01.777337    9061 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:92:dc:d1:78:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2
	I0920 10:48:01.779151    9061 main.go:141] libmachine: STDOUT: 
	I0920 10:48:01.779166    9061 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:48:01.779192    9061 fix.go:56] duration metric: took 14.346042ms for fixHost
	I0920 10:48:01.779196    9061 start.go:83] releasing machines lock for "kubernetes-upgrade-279000", held for 14.361125ms
	W0920 10:48:01.779201    9061 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:48:01.779233    9061 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:48:01.779237    9061 start.go:729] Will try again in 5 seconds ...
	I0920 10:48:06.781297    9061 start.go:360] acquireMachinesLock for kubernetes-upgrade-279000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:48:06.781472    9061 start.go:364] duration metric: took 143.209µs to acquireMachinesLock for "kubernetes-upgrade-279000"
	I0920 10:48:06.781528    9061 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:48:06.781535    9061 fix.go:54] fixHost starting: 
	I0920 10:48:06.781771    9061 fix.go:112] recreateIfNeeded on kubernetes-upgrade-279000: state=Stopped err=<nil>
	W0920 10:48:06.781780    9061 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:48:06.786160    9061 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-279000" ...
	I0920 10:48:06.793979    9061 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:48:06.794031    9061 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:92:dc:d1:78:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubernetes-upgrade-279000/disk.qcow2
	I0920 10:48:06.797638    9061 main.go:141] libmachine: STDOUT: 
	I0920 10:48:06.797656    9061 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:48:06.797685    9061 fix.go:56] duration metric: took 16.150833ms for fixHost
	I0920 10:48:06.797690    9061 start.go:83] releasing machines lock for "kubernetes-upgrade-279000", held for 16.211083ms
	W0920 10:48:06.797745    9061 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-279000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:48:06.805949    9061 out.go:201] 
	W0920 10:48:06.809995    9061 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:48:06.810005    9061 out.go:270] * 
	W0920 10:48:06.810809    9061 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:48:06.824012    9061 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-279000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-279000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-279000 version --output=json: exit status 1 (36.994292ms)

** stderr ** 
	error: context "kubernetes-upgrade-279000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-20 10:48:06.866452 -0700 PDT m=+962.723347626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-279000 -n kubernetes-upgrade-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-279000 -n kubernetes-upgrade-279000: exit status 7 (30.301209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-279000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-279000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-279000
--- FAIL: TestKubernetesUpgrade (18.83s)
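
Every start attempt in this test dies at the same precondition: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial to /var/run/socket_vmnet is refused before any VM work begins. That points at the socket_vmnet daemon not running (or not listening) on the CI host, rather than at the upgrade logic under test. Below is a minimal Go sketch of a pre-flight probe for that socket; the helper is illustrative and not part of minikube, and the socket path is taken from the log above.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// checkSocketVMnet dials the unix socket that socket_vmnet_client
	// connects to on behalf of the qemu2 driver. A "connection refused"
	// here reproduces the failure seen in the log before QEMU starts.
	func checkSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := checkSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Println(err) // e.g. dial unix ...: connect: connection refused
			return
		}
		fmt.Println("socket_vmnet is up")
	}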

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.03s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19679
- KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3063813566/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.03s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.06s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19679
- KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2690079625/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.06s)
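
Both TestHyperkitDriverSkipUpgrade subtests (v1.11.0-to-current above and v1.2.0-to-current here) fail identically: the hyperkit driver exists only for darwin/amd64, so on this arm64 host minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any driver-upgrade logic runs. A hedged sketch of the guard pattern that would skip, rather than fail, such tests on Apple silicon follows; the test function below is hypothetical, while the real tests live in driver_install_or_update_test.go.

	package driver_test

	import (
		"runtime"
		"testing"
	)

	// TestHyperkitUpgradeSketch demonstrates skipping on platforms where
	// the hyperkit driver cannot run, instead of recording a failure.
	func TestHyperkitUpgradeSketch(t *testing.T) {
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit is not supported on %s/%s", runtime.GOOS, runtime.GOARCH)
		}
		// ... exercise the binary/driver upgrade path here ...
	}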

TestStoppedBinaryUpgrade/Upgrade (574.15s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3174366902 start -p stopped-upgrade-770000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3174366902 start -p stopped-upgrade-770000 --memory=2200 --vm-driver=qemu2 : (40.55257575s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3174366902 -p stopped-upgrade-770000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3174366902 -p stopped-upgrade-770000 stop: (12.115187291s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-770000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-770000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.382250833s)

-- stdout --
	* [stopped-upgrade-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-770000" primary control-plane node in "stopped-upgrade-770000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-770000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0920 10:49:00.737539    9094 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:49:00.737726    9094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:49:00.737730    9094 out.go:358] Setting ErrFile to fd 2...
	I0920 10:49:00.737733    9094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:49:00.737906    9094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:49:00.739100    9094 out.go:352] Setting JSON to false
	I0920 10:49:00.758234    9094 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6511,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:49:00.758308    9094 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:49:00.762691    9094 out.go:177] * [stopped-upgrade-770000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:49:00.769632    9094 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:49:00.769663    9094 notify.go:220] Checking for updates...
	I0920 10:49:00.776692    9094 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:49:00.780655    9094 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:49:00.783679    9094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:49:00.786698    9094 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:49:00.789616    9094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:49:00.793024    9094 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:49:00.796672    9094 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 10:49:00.799654    9094 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:49:00.803616    9094 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:49:00.810609    9094 start.go:297] selected driver: qemu2
	I0920 10:49:00.810615    9094 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51545 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:49:00.810663    9094 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:49:00.813438    9094 cni.go:84] Creating CNI manager for ""
	I0920 10:49:00.813470    9094 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:49:00.813507    9094 start.go:340] cluster config:
	{Name:stopped-upgrade-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51545 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:49:00.813558    9094 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:49:00.819020    9094 out.go:177] * Starting "stopped-upgrade-770000" primary control-plane node in "stopped-upgrade-770000" cluster
	I0920 10:49:00.822618    9094 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:49:00.822633    9094 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0920 10:49:00.822641    9094 cache.go:56] Caching tarball of preloaded images
	I0920 10:49:00.822693    9094 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:49:00.822699    9094 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0920 10:49:00.822746    9094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/config.json ...
	I0920 10:49:00.823074    9094 start.go:360] acquireMachinesLock for stopped-upgrade-770000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:49:00.823101    9094 start.go:364] duration metric: took 20.25µs to acquireMachinesLock for "stopped-upgrade-770000"
	I0920 10:49:00.823110    9094 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:49:00.823115    9094 fix.go:54] fixHost starting: 
	I0920 10:49:00.823226    9094 fix.go:112] recreateIfNeeded on stopped-upgrade-770000: state=Stopped err=<nil>
	W0920 10:49:00.823234    9094 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:49:00.831612    9094 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-770000" ...
	I0920 10:49:00.835666    9094 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:49:00.835774    9094 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51511-:22,hostfwd=tcp::51512-:2376,hostname=stopped-upgrade-770000 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/disk.qcow2
	I0920 10:49:00.882989    9094 main.go:141] libmachine: STDOUT: 
	I0920 10:49:00.883009    9094 main.go:141] libmachine: STDERR: 
	I0920 10:49:00.883016    9094 main.go:141] libmachine: Waiting for VM to start (ssh -p 51511 docker@127.0.0.1)...
	I0920 10:49:20.572309    9094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/config.json ...
	I0920 10:49:20.573100    9094 machine.go:93] provisionDockerMachine start ...
	I0920 10:49:20.573308    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:20.573851    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:20.573869    9094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 10:49:20.647290    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 10:49:20.647323    9094 buildroot.go:166] provisioning hostname "stopped-upgrade-770000"
	I0920 10:49:20.647450    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:20.647709    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:20.647726    9094 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-770000 && echo "stopped-upgrade-770000" | sudo tee /etc/hostname
	I0920 10:49:20.716733    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-770000
	
	I0920 10:49:20.716793    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:20.716926    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:20.716936    9094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-770000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-770000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-770000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 10:49:20.778907    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 10:49:20.778918    9094 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19679-6783/.minikube CaCertPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19679-6783/.minikube}
	I0920 10:49:20.778925    9094 buildroot.go:174] setting up certificates
	I0920 10:49:20.778930    9094 provision.go:84] configureAuth start
	I0920 10:49:20.778934    9094 provision.go:143] copyHostCerts
	I0920 10:49:20.778998    9094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19679-6783/.minikube/cert.pem, removing ...
	I0920 10:49:20.779005    9094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19679-6783/.minikube/cert.pem
	I0920 10:49:20.779249    9094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19679-6783/.minikube/cert.pem (1123 bytes)
	I0920 10:49:20.779426    9094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19679-6783/.minikube/key.pem, removing ...
	I0920 10:49:20.779431    9094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19679-6783/.minikube/key.pem
	I0920 10:49:20.779479    9094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19679-6783/.minikube/key.pem (1675 bytes)
	I0920 10:49:20.779589    9094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.pem, removing ...
	I0920 10:49:20.779593    9094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.pem
	I0920 10:49:20.779638    9094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.pem (1078 bytes)
	I0920 10:49:20.779729    9094 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-770000 san=[127.0.0.1 localhost minikube stopped-upgrade-770000]
	I0920 10:49:20.823212    9094 provision.go:177] copyRemoteCerts
	I0920 10:49:20.823247    9094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 10:49:20.823254    9094 sshutil.go:53] new ssh client: &{IP:localhost Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/id_rsa Username:docker}
	I0920 10:49:20.853398    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0920 10:49:20.860281    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 10:49:20.867441    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 10:49:20.874383    9094 provision.go:87] duration metric: took 95.44475ms to configureAuth
	I0920 10:49:20.874392    9094 buildroot.go:189] setting minikube options for container-runtime
	I0920 10:49:20.874500    9094 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:49:20.874551    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:20.874637    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:20.874641    9094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 10:49:20.931824    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 10:49:20.931841    9094 buildroot.go:70] root file system type: tmpfs
	I0920 10:49:20.931891    9094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 10:49:20.931953    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:20.932070    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:20.932106    9094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 10:49:20.996315    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 10:49:20.996378    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:20.996508    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:20.996519    9094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 10:49:21.355125    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0920 10:49:21.355140    9094 machine.go:96] duration metric: took 782.0315ms to provisionDockerMachine
	I0920 10:49:21.355148    9094 start.go:293] postStartSetup for "stopped-upgrade-770000" (driver="qemu2")
	I0920 10:49:21.355155    9094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 10:49:21.355225    9094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 10:49:21.355235    9094 sshutil.go:53] new ssh client: &{IP:localhost Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/id_rsa Username:docker}
	I0920 10:49:21.386300    9094 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 10:49:21.387601    9094 info.go:137] Remote host: Buildroot 2021.02.12
	I0920 10:49:21.387608    9094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19679-6783/.minikube/addons for local assets ...
	I0920 10:49:21.387688    9094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19679-6783/.minikube/files for local assets ...
	I0920 10:49:21.387785    9094 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/ssl/certs/72792.pem -> 72792.pem in /etc/ssl/certs
	I0920 10:49:21.387885    9094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 10:49:21.390777    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/ssl/certs/72792.pem --> /etc/ssl/certs/72792.pem (1708 bytes)
	I0920 10:49:21.397872    9094 start.go:296] duration metric: took 42.71925ms for postStartSetup
	I0920 10:49:21.397886    9094 fix.go:56] duration metric: took 20.57484875s for fixHost
	I0920 10:49:21.397932    9094 main.go:141] libmachine: Using SSH client type: native
	I0920 10:49:21.398038    9094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d1c00] 0x1010d4440 <nil>  [] 0s} localhost 51511 <nil> <nil>}
	I0920 10:49:21.398043    9094 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 10:49:21.453478    9094 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726854561.749706837
	
	I0920 10:49:21.453486    9094 fix.go:216] guest clock: 1726854561.749706837
	I0920 10:49:21.453489    9094 fix.go:229] Guest: 2024-09-20 10:49:21.749706837 -0700 PDT Remote: 2024-09-20 10:49:21.397888 -0700 PDT m=+20.692017418 (delta=351.818837ms)
	I0920 10:49:21.453500    9094 fix.go:200] guest clock delta is within tolerance: 351.818837ms
	I0920 10:49:21.453503    9094 start.go:83] releasing machines lock for "stopped-upgrade-770000", held for 20.630474458s
	I0920 10:49:21.453571    9094 ssh_runner.go:195] Run: cat /version.json
	I0920 10:49:21.453581    9094 sshutil.go:53] new ssh client: &{IP:localhost Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/id_rsa Username:docker}
	I0920 10:49:21.453572    9094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 10:49:21.453618    9094 sshutil.go:53] new ssh client: &{IP:localhost Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/id_rsa Username:docker}
	W0920 10:49:21.454151    9094 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51511: connect: connection refused
	I0920 10:49:21.454172    9094 retry.go:31] will retry after 339.055571ms: dial tcp [::1]:51511: connect: connection refused
	W0920 10:49:21.484290    9094 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0920 10:49:21.484337    9094 ssh_runner.go:195] Run: systemctl --version
	I0920 10:49:21.486191    9094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 10:49:21.487734    9094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 10:49:21.487766    9094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0920 10:49:21.491018    9094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0920 10:49:21.495963    9094 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 10:49:21.495971    9094 start.go:495] detecting cgroup driver to use...
	I0920 10:49:21.496048    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:49:21.502606    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0920 10:49:21.505740    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 10:49:21.508544    9094 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 10:49:21.508571    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 10:49:21.511771    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:49:21.515293    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 10:49:21.518627    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 10:49:21.521470    9094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 10:49:21.524246    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 10:49:21.527615    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 10:49:21.531124    9094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 10:49:21.534557    9094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 10:49:21.537028    9094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 10:49:21.539892    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:49:21.588038    9094 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 10:49:21.598680    9094 start.go:495] detecting cgroup driver to use...
	I0920 10:49:21.598757    9094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 10:49:21.604276    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:49:21.610887    9094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 10:49:21.619723    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 10:49:21.624665    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:49:21.629461    9094 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 10:49:21.679368    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 10:49:21.684771    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 10:49:21.690036    9094 ssh_runner.go:195] Run: which cri-dockerd
	I0920 10:49:21.691241    9094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 10:49:21.694187    9094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0920 10:49:21.699070    9094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 10:49:21.765389    9094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 10:49:21.842844    9094 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 10:49:21.842901    9094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 10:49:21.848155    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:49:21.928957    9094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:49:23.038802    9094 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.109832542s)
	I0920 10:49:23.038867    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 10:49:23.043571    9094 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0920 10:49:23.049619    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:49:23.053961    9094 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 10:49:23.124408    9094 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 10:49:23.196973    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:49:23.267225    9094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 10:49:23.273254    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 10:49:23.278015    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:49:23.355551    9094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 10:49:23.398536    9094 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 10:49:23.398630    9094 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 10:49:23.401475    9094 start.go:563] Will wait 60s for crictl version
	I0920 10:49:23.401542    9094 ssh_runner.go:195] Run: which crictl
	I0920 10:49:23.402864    9094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 10:49:23.417218    9094 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0920 10:49:23.417307    9094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:49:23.433451    9094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 10:49:23.450081    9094 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0920 10:49:23.450168    9094 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0920 10:49:23.451452    9094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 10:49:23.455554    9094 kubeadm.go:883] updating cluster {Name:stopped-upgrade-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51545 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0920 10:49:23.455599    9094 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0920 10:49:23.455652    9094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:49:23.465735    9094 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:49:23.465744    9094 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:49:23.465805    9094 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:49:23.468711    9094 ssh_runner.go:195] Run: which lz4
	I0920 10:49:23.469980    9094 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 10:49:23.471078    9094 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 10:49:23.471088    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0920 10:49:24.415723    9094 docker.go:649] duration metric: took 945.790125ms to copy over tarball
	I0920 10:49:24.415787    9094 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 10:49:25.585253    9094 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.169457292s)
	I0920 10:49:25.585266    9094 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 10:49:25.600768    9094 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 10:49:25.603941    9094 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0920 10:49:25.609359    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:49:25.684770    9094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 10:49:27.329191    9094 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.644411583s)
	I0920 10:49:27.329315    9094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 10:49:27.340134    9094 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 10:49:27.340145    9094 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0920 10:49:27.340150    9094 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 10:49:27.345301    9094 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:49:27.347604    9094 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:49:27.349767    9094 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0920 10:49:27.349804    9094 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:49:27.351975    9094 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:49:27.352119    9094 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:49:27.353611    9094 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0920 10:49:27.353628    9094 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:49:27.354585    9094 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:49:27.355397    9094 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:49:27.355980    9094 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:49:27.356766    9094 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:49:27.357244    9094 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:49:27.357591    9094 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:49:27.358361    9094 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:49:27.359431    9094 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:49:27.753942    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0920 10:49:27.764744    9094 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0920 10:49:27.764773    9094 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0920 10:49:27.764833    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0920 10:49:27.766667    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:49:27.771525    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:49:27.779849    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:49:27.781128    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0920 10:49:27.781237    9094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0920 10:49:27.790167    9094 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0920 10:49:27.790190    9094 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:49:27.790256    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0920 10:49:27.795975    9094 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0920 10:49:27.795995    9094 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:49:27.796054    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0920 10:49:27.798937    9094 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0920 10:49:27.798963    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0920 10:49:27.798969    9094 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0920 10:49:27.798988    9094 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:49:27.799036    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0920 10:49:27.799357    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:49:27.816169    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0920 10:49:27.822738    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0920 10:49:27.822783    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0920 10:49:27.827389    9094 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0920 10:49:27.827406    9094 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:49:27.827471    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0920 10:49:27.828453    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0920 10:49:27.829527    9094 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0920 10:49:27.829533    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0920 10:49:27.839983    9094 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0920 10:49:27.840133    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:49:27.840383    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0920 10:49:27.844493    9094 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0920 10:49:27.844512    9094 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0920 10:49:27.844578    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0920 10:49:27.871040    9094 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
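
The pause_3.7 cycle above is the full shape of minikube's cached-image path: docker image inspect probes the runtime, docker rmi drops the mismatched copy, a stat over SSH tests for the tarball on the node, scp transfers it, and "sudo cat ... | docker load" imports it. A minimal Go sketch of the final check-then-load step, assuming a local path instead of minikube's ssh_runner (the tarball path is illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// loadIfPresent imports a cached image tarball into Docker, mirroring the
	// stat -> scp -> "cat | docker load" sequence in the log (scp is elided
	// here; we assume the tarball is already on the local filesystem).
	func loadIfPresent(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("cache tarball missing: %w", err)
		}
		f, err := os.Open(tarball)
		if err != nil {
			return err
		}
		defer f.Close()
		// docker load reads the tar stream on stdin, like `cat ... | docker load`.
		cmd := exec.Command("docker", "load")
		cmd.Stdin = f
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := loadIfPresent("/var/lib/minikube/images/pause_3.7"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
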
	I0920 10:49:27.871072    9094 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0920 10:49:27.871088    9094 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:49:27.871091    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0920 10:49:27.871142    9094 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0920 10:49:27.871200    9094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:49:27.881228    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0920 10:49:27.881244    9094 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0920 10:49:27.881257    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0920 10:49:27.881357    9094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:49:27.894161    9094 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0920 10:49:27.894205    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0920 10:49:27.981496    9094 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0920 10:49:27.981512    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0920 10:49:28.076503    9094 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0920 10:49:28.177312    9094 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0920 10:49:28.177327    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0920 10:49:28.267656    9094 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0920 10:49:28.267799    9094 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:49:28.331520    9094 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0920 10:49:28.331554    9094 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0920 10:49:28.331574    9094 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:49:28.331650    9094 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:49:28.345746    9094 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 10:49:28.345888    9094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:49:28.347260    9094 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0920 10:49:28.347501    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0920 10:49:28.375147    9094 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 10:49:28.375165    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0920 10:49:28.609655    9094 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 10:49:28.609693    9094 cache_images.go:92] duration metric: took 1.269540292s to LoadCachedImages
	W0920 10:49:28.609738    9094 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
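
The two X lines above report why the batch failed: the expected tarball for kube-controller-manager_v1.24.1 was never written to the host cache, so LoadCachedImages aborts even though the other images were already transferred; startup continues and the missing images must be pulled from a registry later. A small sketch of an up-front existence pass over the expected tarballs (the cache directory and names are illustrative, patterned on the paths in this log):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/arm64/registry.k8s.io")
		expected := []string{
			"pause_3.7", "etcd_3.5.3-0", "coredns/coredns_v1.8.6",
			"kube-apiserver_v1.24.1", "kube-controller-manager_v1.24.1",
			"kube-scheduler_v1.24.1", "kube-proxy_v1.24.1",
		}
		for _, name := range expected {
			p := filepath.Join(cacheDir, name)
			if _, err := os.Stat(p); err != nil {
				// This is the failure mode surfaced by the log above.
				fmt.Printf("missing from cache: %s\n", p)
			}
		}
	}
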
	I0920 10:49:28.609746    9094 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0920 10:49:28.609806    9094 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-770000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 10:49:28.609902    9094 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 10:49:28.623073    9094 cni.go:84] Creating CNI manager for ""
	I0920 10:49:28.623086    9094 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:49:28.623096    9094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 10:49:28.623104    9094 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-770000 NodeName:stopped-upgrade-770000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 10:49:28.623177    9094 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-770000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
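The rendered file is a four-document YAML stream separated by ---: InitConfiguration and ClusterConfiguration for kubeadm, plus KubeletConfiguration and KubeProxyConfiguration. A sketch that walks the stream and prints each document's kind, assuming the gopkg.in/yaml.v3 module is available:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		// yaml.Decoder decodes one document per Decode call and returns
		// io.EOF once the multi-document stream is exhausted.
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
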
	I0920 10:49:28.623241    9094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0920 10:49:28.626826    9094 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 10:49:28.626861    9094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 10:49:28.629451    9094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0920 10:49:28.634214    9094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 10:49:28.639160    9094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0920 10:49:28.644754    9094 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0920 10:49:28.646022    9094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
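
The /etc/hosts one-liner above filters out any stale control-plane.minikube.internal entry with grep -v (the $'\t' embeds a literal tab in the pattern), appends the current mapping, and copies the temp file back in one sudo step. The same edit expressed as a standalone Go sketch (local file access assumed; the real code runs the shell pipeline remotely over SSH):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing entry for the control-plane name,
			// like the `grep -v $'\tcontrol-plane.minikube.internal$'` filter.
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, "10.0.2.15\t"+host)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}
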
	I0920 10:49:28.649449    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:49:28.727741    9094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:49:28.733454    9094 certs.go:68] Setting up /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000 for IP: 10.0.2.15
	I0920 10:49:28.733463    9094 certs.go:194] generating shared ca certs ...
	I0920 10:49:28.733473    9094 certs.go:226] acquiring lock for ca certs: {Name:mk223deb0e7531c2ef743391b3102022988e9e71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:49:28.733654    9094 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.key
	I0920 10:49:28.733708    9094 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/proxy-client-ca.key
	I0920 10:49:28.733713    9094 certs.go:256] generating profile certs ...
	I0920 10:49:28.733789    9094 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/client.key
	I0920 10:49:28.733806    9094 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.key.95f8aeec
	I0920 10:49:28.733815    9094 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.crt.95f8aeec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0920 10:49:28.907055    9094 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.crt.95f8aeec ...
	I0920 10:49:28.907072    9094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.crt.95f8aeec: {Name:mkd934f6f29ee3f1a97421450aecdc94ca438ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:49:28.908540    9094 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.key.95f8aeec ...
	I0920 10:49:28.908547    9094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.key.95f8aeec: {Name:mk82aa04d4220c51f383542e5fbc9e62cb636def Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:49:28.908710    9094 certs.go:381] copying /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.crt.95f8aeec -> /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.crt
	I0920 10:49:28.908854    9094 certs.go:385] copying /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.key.95f8aeec -> /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.key
	I0920 10:49:28.909014    9094 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/proxy-client.key
	I0920 10:49:28.909153    9094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/7279.pem (1338 bytes)
	W0920 10:49:28.909183    9094 certs.go:480] ignoring /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/7279_empty.pem, impossibly tiny 0 bytes
	I0920 10:49:28.909190    9094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 10:49:28.909216    9094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem (1078 bytes)
	I0920 10:49:28.909245    9094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem (1123 bytes)
	I0920 10:49:28.909268    9094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/key.pem (1675 bytes)
	I0920 10:49:28.909320    9094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/ssl/certs/72792.pem (1708 bytes)
	I0920 10:49:28.909650    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 10:49:28.916485    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 10:49:28.923671    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 10:49:28.931231    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 10:49:28.937754    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 10:49:28.944422    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 10:49:28.951616    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 10:49:28.959070    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 10:49:28.966221    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/ssl/certs/72792.pem --> /usr/share/ca-certificates/72792.pem (1708 bytes)
	I0920 10:49:28.972930    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 10:49:28.979900    9094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/7279.pem --> /usr/share/ca-certificates/7279.pem (1338 bytes)
	I0920 10:49:28.987252    9094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 10:49:28.992735    9094 ssh_runner.go:195] Run: openssl version
	I0920 10:49:28.994644    9094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72792.pem && ln -fs /usr/share/ca-certificates/72792.pem /etc/ssl/certs/72792.pem"
	I0920 10:49:28.997542    9094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72792.pem
	I0920 10:49:28.998991    9094 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 17:32 /usr/share/ca-certificates/72792.pem
	I0920 10:49:28.999022    9094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72792.pem
	I0920 10:49:29.000961    9094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72792.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 10:49:29.004135    9094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 10:49:29.007802    9094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:49:29.009451    9094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:49:29.009476    9094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 10:49:29.011302    9094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 10:49:29.014488    9094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7279.pem && ln -fs /usr/share/ca-certificates/7279.pem /etc/ssl/certs/7279.pem"
	I0920 10:49:29.017722    9094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7279.pem
	I0920 10:49:29.019500    9094 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 17:32 /usr/share/ca-certificates/7279.pem
	I0920 10:49:29.019558    9094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7279.pem
	I0920 10:49:29.021721    9094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7279.pem /etc/ssl/certs/51391683.0"
	I0920 10:49:29.025011    9094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 10:49:29.026880    9094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 10:49:29.029519    9094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 10:49:29.031887    9094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 10:49:29.034652    9094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 10:49:29.037118    9094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 10:49:29.039605    9094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
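
Each "openssl x509 ... -checkend 86400" run above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; a nonzero exit is what would force regeneration before kubeadm starts. The equivalent check with Go's standard library, using one of the certificate paths from this log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if the
		// certificate expires within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate ok")
	}
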
	I0920 10:49:29.042689    9094 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51545 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0920 10:49:29.042800    9094 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:49:29.054063    9094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 10:49:29.057189    9094 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 10:49:29.057194    9094 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 10:49:29.057221    9094 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 10:49:29.059991    9094 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 10:49:29.060288    9094 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-770000" does not appear in /Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:49:29.060384    9094 kubeconfig.go:62] /Users/jenkins/minikube-integration/19679-6783/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-770000" cluster setting kubeconfig missing "stopped-upgrade-770000" context setting]
	I0920 10:49:29.060560    9094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/kubeconfig: {Name:mkc202c0538e947b3e0d9844748996d0c112bf36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:49:29.061052    9094 kapi.go:59] client config for stopped-upgrade-770000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/client.key", CAFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1026aa030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:49:29.061417    9094 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 10:49:29.064007    9094 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-770000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
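
The drift check relies on diff -u exit codes: 0 means the deployed kubeadm.yaml matches the newly rendered one, 1 means they differ, and the unified diff above is the captured stdout. A sketch of interpreting that status from Go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("configs match, no reconfigure needed")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
			// Exit status 1 means the files differ; the diff body is on stdout.
			fmt.Printf("config drift detected:\n%s", out)
		default:
			fmt.Println("diff failed:", err)
		}
	}
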
	I0920 10:49:29.064015    9094 kubeadm.go:1160] stopping kube-system containers ...
	I0920 10:49:29.064070    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 10:49:29.075085    9094 docker.go:483] Stopping containers: [0f1f6ae5b381 e7cf43c8a211 cd1e5a8150d3 07e2780d69fa 4d8808795719 0efea235af05 d9ea4bef2395 d9687b348b64]
	I0920 10:49:29.075169    9094 ssh_runner.go:195] Run: docker stop 0f1f6ae5b381 e7cf43c8a211 cd1e5a8150d3 07e2780d69fa 4d8808795719 0efea235af05 d9ea4bef2395 d9687b348b64
	I0920 10:49:29.086241    9094 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 10:49:29.091416    9094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:49:29.094541    9094 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:49:29.094554    9094 kubeadm.go:157] found existing configuration files:
	
	I0920 10:49:29.094579    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/admin.conf
	I0920 10:49:29.097112    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:49:29.097138    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:49:29.099951    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/kubelet.conf
	I0920 10:49:29.102843    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:49:29.102868    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:49:29.105321    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/controller-manager.conf
	I0920 10:49:29.107888    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:49:29.107916    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:49:29.111105    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/scheduler.conf
	I0920 10:49:29.113811    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:49:29.113839    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 10:49:29.116476    9094 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:49:29.119565    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:49:29.143949    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:49:29.503407    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:49:29.625674    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:49:29.651956    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 10:49:29.670162    9094 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:49:29.670251    9094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:49:30.172477    9094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:49:30.672328    9094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:49:30.676386    9094 api_server.go:72] duration metric: took 1.006229083s to wait for apiserver process to appear ...
	I0920 10:49:30.676395    9094 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:49:30.676406    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:35.677987    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:35.678037    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:40.678632    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:40.678652    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:45.678915    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:45.678976    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:50.679688    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:50.679799    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:49:55.681047    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:49:55.681154    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:00.682427    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:00.682452    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:05.683787    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:05.683885    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:10.685951    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:10.686031    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:15.688583    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:15.688622    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:20.689399    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:20.689424    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:25.691775    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:25.691858    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:30.692538    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
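
Each probe above is allowed roughly five seconds before being reported as stopped, and the loop never succeeds, which is what pushes the code into the log-gathering cycle below. A minimal polling sketch of the same healthz loop; it skips TLS verification for brevity, whereas minikube's real client pins the cluster CA shown in the kapi config earlier:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gaps between probes above
			Transport: &http.Transport{
				// Sketch only: skip verification instead of pinning the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}
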
	I0920 10:50:30.692648    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:50:30.704089    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:50:30.704176    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:50:30.714866    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:50:30.714952    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:50:30.725545    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:50:30.725617    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:50:30.736000    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:50:30.736091    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:50:30.746567    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:50:30.746641    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:50:30.757578    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:50:30.757664    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:50:30.767548    9094 logs.go:276] 0 containers: []
	W0920 10:50:30.767564    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:50:30.767635    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:50:30.778360    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:50:30.778377    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:50:30.778382    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:50:30.783159    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:50:30.783165    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:50:30.821246    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:50:30.821256    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:50:30.833191    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:50:30.833204    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:50:30.851134    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:50:30.851147    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:50:30.863048    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:50:30.863061    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:50:30.887028    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:50:30.887042    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:50:30.903060    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:50:30.903073    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:50:30.914902    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:50:30.914916    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:50:30.936283    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:50:30.936297    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:50:30.978004    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:50:30.978018    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:50:30.993220    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:50:30.993236    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:50:31.004007    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:50:31.004018    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:50:31.018395    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:50:31.018408    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:50:31.029790    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:50:31.029802    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:50:31.057378    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:50:31.057388    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
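
When healthz keeps failing, minikube snapshots diagnostics: journalctl for kubelet and docker, "docker logs --tail 400" for each control-plane container (current and previous instance), a container status listing, and kubectl describe nodes; the identical cycle repeats after every failed poll below. A sketch of the per-container tail step, reusing a few container IDs from this log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		containers := map[string]string{
			"kube-apiserver": "9e153969f1f5",
			"etcd":           "97e934094d19",
			"coredns":        "a59f98fbd24a",
		}
		for name, id := range containers {
			// Same command shape as the "Gathering logs for ..." lines above.
			out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("== %s (%s): %v\n", name, id, err)
				continue
			}
			fmt.Printf("== %s (%s):\n%s", name, id, out)
		}
	}
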
	I0920 10:50:33.654730    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:38.657142    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:38.657479    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:50:38.684326    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:50:38.684450    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:50:38.706495    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:50:38.706597    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:50:38.724707    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:50:38.724791    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:50:38.735360    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:50:38.735447    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:50:38.746078    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:50:38.746166    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:50:38.756946    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:50:38.757025    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:50:38.767697    9094 logs.go:276] 0 containers: []
	W0920 10:50:38.767710    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:50:38.767781    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:50:38.778364    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:50:38.778392    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:50:38.778399    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:50:38.789953    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:50:38.789963    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:50:38.830476    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:50:38.830484    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:50:38.867404    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:50:38.867416    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:50:38.878952    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:50:38.878965    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:50:38.894939    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:50:38.894951    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:50:38.909686    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:50:38.909701    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:50:38.923831    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:50:38.923843    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:50:38.927923    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:50:38.927929    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:50:38.963377    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:50:38.963388    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:50:38.978589    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:50:38.978601    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:50:39.004186    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:50:39.004196    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:50:39.016633    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:50:39.016647    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:50:39.031079    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:50:39.031093    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:50:39.047714    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:50:39.047729    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:50:39.060567    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:50:39.060578    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:50:41.580160    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:46.582443    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:46.582788    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:50:46.609243    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:50:46.609389    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:50:46.626817    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:50:46.626928    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:50:46.639878    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:50:46.639961    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:50:46.651144    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:50:46.651217    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:50:46.661664    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:50:46.661755    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:50:46.672046    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:50:46.672129    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:50:46.682390    9094 logs.go:276] 0 containers: []
	W0920 10:50:46.682401    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:50:46.682469    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:50:46.692970    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:50:46.692988    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:50:46.692993    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:50:46.704165    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:50:46.704175    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:50:46.728105    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:50:46.728115    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:50:46.766536    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:50:46.766554    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:50:46.784488    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:50:46.784499    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:50:46.801807    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:50:46.801819    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:50:46.819515    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:50:46.819525    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:50:46.823728    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:50:46.823735    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:50:46.859255    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:50:46.859267    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:50:46.871453    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:50:46.871464    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:50:46.884953    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:50:46.884962    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:50:46.898551    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:50:46.898561    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:50:46.936655    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:50:46.936666    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:50:46.947709    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:50:46.947721    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:50:46.959793    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:50:46.959805    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:50:46.973866    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:50:46.973879    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:50:49.487960    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:50:54.489114    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:50:54.489276    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:50:54.502320    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:50:54.502410    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:50:54.513506    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:50:54.513590    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:50:54.524030    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:50:54.524113    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:50:54.534292    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:50:54.534378    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:50:54.544500    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:50:54.544580    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:50:54.555394    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:50:54.555471    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:50:54.565635    9094 logs.go:276] 0 containers: []
	W0920 10:50:54.565647    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:50:54.565718    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:50:54.575664    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:50:54.575680    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:50:54.575685    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:50:54.587295    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:50:54.587306    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:50:54.598882    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:50:54.598894    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:50:54.623985    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:50:54.623996    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:50:54.628163    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:50:54.628170    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:50:54.644202    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:50:54.644212    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:50:54.658656    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:50:54.658666    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:50:54.672228    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:50:54.672237    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:50:54.707707    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:50:54.707722    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:50:54.722497    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:50:54.722507    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:50:54.734287    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:50:54.734299    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:50:54.746248    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:50:54.746259    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:50:54.764658    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:50:54.764672    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:50:54.776276    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:50:54.776292    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:50:54.813198    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:50:54.813212    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:50:54.861283    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:50:54.861296    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
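
The cycle above opens with the same two lines that begin every retry: api_server.go:253 probes https://10.0.2.15:8443/healthz, and roughly five seconds later api_server.go:269 reports it stopped. The wording "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" is the standard error text Go's net/http client produces when its Timeout fires before any response headers arrive, which is why every cycle fails identically when nothing is listening. A minimal sketch that reproduces this behavior, assuming only a stock net/http client with a five-second timeout (the InsecureSkipVerify setting is an assumption for a self-signed test apiserver certificate, not something the log confirms):

    // Sketch only: reproduces the healthz timeout error shape seen above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		// The ~5s gap between "Checking" and "stopped" in the log
    		// suggests a client-side timeout of this order.
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Assumption: skip verification for a self-signed cert.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		// With no apiserver answering, this prints the exact shape above:
    		// Get "https://10.0.2.15:8443/healthz": context deadline exceeded
    		// (Client.Timeout exceeded while awaiting headers)
    		fmt.Println("stopped:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    }
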
	I0920 10:50:57.381490    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:02.384244    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:02.384432    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:02.397134    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:02.397229    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:02.408421    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:02.408523    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:02.418937    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:02.419021    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:02.429390    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:02.429472    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:02.441848    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:02.441928    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:02.452469    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:02.452552    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:02.462902    9094 logs.go:276] 0 containers: []
	W0920 10:51:02.462912    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:02.462979    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:02.474424    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:02.474445    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:02.474451    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:02.511187    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:02.511198    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:02.525363    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:02.525373    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:02.529490    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:02.529500    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:02.540856    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:02.540867    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:02.566266    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:02.566274    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:02.578733    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:02.578748    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:02.593423    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:02.593432    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:02.605057    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:02.605070    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:02.616585    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:02.616598    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:02.634797    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:02.634811    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:02.648348    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:02.648358    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:02.660009    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:02.660021    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:02.695968    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:02.695981    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:02.735562    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:02.735578    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:02.749714    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:02.749728    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
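
After each failed healthz probe, the retry re-enumerates the control-plane containers with one docker ps query per component, filtering on the k8s_ name prefix used for Kubernetes-managed containers, then prints the count and IDs (logs.go:276). A sketch of that discovery step; the component list is read directly off the filters in the log above, while the loop itself is an illustration rather than minikube's actual source:

    // Sketch only: per-component container discovery as recorded above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		// A %d verb would explain the log's "1 containers" phrasing.
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }

An empty result for a component yields the "0 containers: []" line followed by the W-level warning, exactly as the kindnet queries above show on this CNI-less cluster.
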
	I0920 10:51:05.266625    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:10.268917    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:10.269199    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:10.294089    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:10.294250    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:10.314827    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:10.314921    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:10.331915    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:10.332000    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:10.342513    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:10.342605    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:10.354149    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:10.354232    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:10.364957    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:10.365041    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:10.375514    9094 logs.go:276] 0 containers: []
	W0920 10:51:10.375528    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:10.375594    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:10.385957    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:10.385973    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:10.385979    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:10.390587    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:10.390594    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:10.406434    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:10.406446    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:10.428172    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:10.428186    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:10.453740    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:10.453753    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:10.488892    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:10.488905    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:10.504074    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:10.504090    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:10.546163    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:10.546174    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:10.560742    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:10.560752    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:10.575178    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:10.575193    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:10.613429    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:10.613440    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:10.629041    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:10.629058    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:10.641405    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:10.641415    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:10.656041    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:10.656051    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:10.667292    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:10.667304    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:10.681188    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:10.681199    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:13.195142    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:18.197623    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:18.198068    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:18.231968    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:18.232127    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:18.250875    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:18.250976    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:18.264461    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:18.264551    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:18.276389    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:18.276476    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:18.287037    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:18.287122    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:18.297464    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:18.297546    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:18.308688    9094 logs.go:276] 0 containers: []
	W0920 10:51:18.308699    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:18.308771    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:18.319847    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:18.319866    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:18.319871    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:18.334520    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:18.334530    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:18.352249    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:18.352260    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:18.364353    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:18.364363    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:18.378571    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:18.378579    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:18.390123    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:18.390135    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:18.395035    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:18.395043    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:18.431102    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:18.431113    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:18.470649    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:18.470667    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:18.483121    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:18.483135    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:18.524045    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:18.524058    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:18.535808    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:18.535821    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:18.551821    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:18.551831    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:18.569356    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:18.569366    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:18.583595    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:18.583607    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:18.601139    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:18.601155    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:21.130294    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:26.132669    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:26.132981    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:26.162875    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:26.163022    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:26.180110    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:26.180208    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:26.198644    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:26.198732    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:26.213131    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:26.213213    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:26.223846    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:26.223926    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:26.235007    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:26.235082    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:26.245639    9094 logs.go:276] 0 containers: []
	W0920 10:51:26.245651    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:26.245726    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:26.256304    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:26.256322    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:26.256328    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:26.271398    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:26.271409    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:26.283473    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:26.283484    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:26.295389    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:26.295399    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:26.334293    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:26.334302    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:26.339106    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:26.339115    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:26.376633    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:26.376646    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:26.399727    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:26.399734    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:26.435099    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:26.435110    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:26.449451    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:26.449465    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:26.460925    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:26.460939    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:26.474696    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:26.474707    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:26.490753    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:26.490767    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:26.502750    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:26.502767    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:26.517234    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:26.517248    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:26.534381    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:26.534394    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:29.050057    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:34.051053    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:34.051232    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:34.062722    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:34.062813    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:34.073627    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:34.073709    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:34.084911    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:34.084998    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:34.095748    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:34.095835    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:34.106761    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:34.106845    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:34.117390    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:34.117475    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:34.131139    9094 logs.go:276] 0 containers: []
	W0920 10:51:34.131150    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:34.131219    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:34.145681    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:34.145697    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:34.145702    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:34.157016    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:34.157026    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:34.168512    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:34.168521    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:34.183099    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:34.183114    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:34.195098    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:34.195110    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:34.211398    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:34.211408    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:34.225504    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:34.225518    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:34.240115    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:34.240126    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:34.258640    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:34.258655    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:34.274153    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:34.274169    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:34.288046    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:34.288058    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:34.327320    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:34.327333    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:34.365632    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:34.365644    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:34.388576    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:34.388583    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:34.400296    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:34.400311    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:34.404576    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:34.404582    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:36.947218    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:41.948079    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:41.948348    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:41.968336    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:41.968443    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:41.981844    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:41.981932    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:41.995562    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:41.995643    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:42.006622    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:42.006710    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:42.018278    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:42.018360    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:42.029103    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:42.029181    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:42.039864    9094 logs.go:276] 0 containers: []
	W0920 10:51:42.039877    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:42.039944    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:42.052626    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:42.052644    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:42.052649    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:42.070776    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:42.070786    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:42.088943    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:42.088954    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:42.100698    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:42.100709    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:42.114216    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:42.114226    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:42.126898    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:42.126912    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:42.164393    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:42.164407    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:42.178405    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:42.178413    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:42.189846    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:42.189858    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:42.213681    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:42.213706    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:42.233006    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:42.233020    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:42.271360    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:42.271367    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:42.275297    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:42.275303    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:42.289419    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:42.289429    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:42.307701    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:42.307715    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:42.342182    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:42.342197    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:44.857213    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:49.859809    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:49.860048    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:49.879157    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:49.879278    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:49.896472    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:49.896567    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:49.909431    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:49.909512    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:49.919661    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:49.919736    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:49.930384    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:49.930461    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:49.941566    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:49.941646    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:49.951243    9094 logs.go:276] 0 containers: []
	W0920 10:51:49.951255    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:49.951328    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:49.964304    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:49.964321    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:49.964328    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:49.982126    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:49.982141    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:49.993999    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:49.994012    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:50.008516    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:50.008527    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:50.022337    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:50.022348    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:50.063082    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:50.063102    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:50.101385    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:50.101400    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:50.116397    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:50.116407    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:50.139884    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:50.139910    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:51:50.144127    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:50.144133    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:50.158496    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:50.158506    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:50.172797    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:50.172805    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:50.183638    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:50.183649    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:50.195505    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:50.195514    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:50.207358    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:50.207373    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:50.219234    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:50.219245    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:52.756321    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:51:57.756787    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:51:57.757310    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:51:57.793279    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:51:57.793434    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:51:57.813035    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:51:57.813145    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:51:57.827646    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:51:57.827744    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:51:57.839997    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:51:57.840083    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:51:57.854240    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:51:57.854324    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:51:57.865035    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:51:57.865118    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:51:57.884386    9094 logs.go:276] 0 containers: []
	W0920 10:51:57.884398    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:51:57.884471    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:51:57.894941    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:51:57.894958    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:51:57.894964    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:51:57.910432    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:51:57.910443    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:51:57.924570    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:51:57.924581    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:51:57.939210    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:51:57.939221    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:51:57.953231    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:51:57.953240    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:51:57.966521    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:51:57.966533    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:51:58.004103    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:51:58.004117    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:51:58.016578    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:51:58.016589    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:51:58.030758    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:51:58.030768    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:51:58.046313    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:51:58.046323    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:51:58.070950    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:51:58.070960    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:51:58.083357    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:51:58.083369    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:51:58.095140    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:51:58.095153    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:51:58.133564    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:51:58.133576    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:51:58.150835    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:51:58.150846    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:51:58.189415    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:51:58.189425    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:00.695604    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:05.697955    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:05.698371    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:05.738725    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:05.738888    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:05.759938    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:05.760060    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:05.775586    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:05.775682    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:05.788415    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:05.788502    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:05.803707    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:05.803788    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:05.820853    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:05.820939    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:05.831041    9094 logs.go:276] 0 containers: []
	W0920 10:52:05.831054    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:05.831123    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:05.842006    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:05.842023    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:05.842029    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:05.880448    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:05.880458    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:05.897421    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:05.897436    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:05.927450    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:05.927466    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:05.941554    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:05.941564    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:05.946040    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:05.946047    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:05.986648    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:05.986663    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:06.001365    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:06.001381    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:06.040956    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:06.040973    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:06.052982    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:06.052992    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:06.067255    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:06.067266    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:06.079672    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:06.079683    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:06.097334    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:06.097345    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:06.109006    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:06.109017    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:06.124161    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:06.124177    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:06.144780    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:06.144791    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:08.659274    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:13.661962    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:13.662184    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:13.674861    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:13.674953    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:13.685889    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:13.685964    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:13.696066    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:13.696143    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:13.706594    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:13.706672    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:13.721882    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:13.721967    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:13.732641    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:13.732714    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:13.742544    9094 logs.go:276] 0 containers: []
	W0920 10:52:13.742555    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:13.742619    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:13.756937    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:13.756955    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:13.756961    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:13.796592    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:13.796613    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:13.807958    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:13.807969    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:13.822170    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:13.822181    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:13.846171    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:13.846183    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:13.882104    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:13.882114    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:13.905344    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:13.905350    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:13.916858    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:13.916869    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:13.920796    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:13.920802    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:13.934621    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:13.934632    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:13.953861    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:13.953877    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:13.967577    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:13.967591    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:13.980098    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:13.980110    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:13.998359    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:13.998373    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:14.036310    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:14.036324    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:14.050458    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:14.050471    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:16.562192    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:21.564655    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:21.565012    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:21.593923    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:21.594063    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:21.611596    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:21.611705    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:21.625755    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:21.625848    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:21.637346    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:21.637442    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:21.647997    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:21.648082    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:21.658804    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:21.658880    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:21.668826    9094 logs.go:276] 0 containers: []
	W0920 10:52:21.668838    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:21.668910    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:21.688930    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:21.688950    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:21.688955    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:21.707816    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:21.707826    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:21.721805    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:21.721820    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:21.744977    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:21.744984    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:21.756900    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:21.756915    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:21.761401    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:21.761408    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:21.799736    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:21.799747    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:21.813048    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:21.813061    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:21.831143    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:21.831154    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:21.867938    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:21.867947    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:21.882157    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:21.882166    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:21.893634    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:21.893646    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:21.909627    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:21.909638    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:21.943674    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:21.943685    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:21.957915    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:21.957926    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:21.970184    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:21.970196    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:24.487990    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:29.490341    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:29.490552    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:29.512509    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:29.512592    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:29.523126    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:29.523207    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:29.533968    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:29.534050    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:29.544508    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:29.544592    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:29.555584    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:29.555666    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:29.566170    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:29.566259    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:29.576919    9094 logs.go:276] 0 containers: []
	W0920 10:52:29.576930    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:29.577001    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:29.587231    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:29.587247    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:29.587253    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:29.623855    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:29.623865    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:29.637660    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:29.637673    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:29.660622    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:29.660628    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:29.664524    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:29.664530    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:29.698078    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:29.698090    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:29.716743    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:29.716753    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:29.731052    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:29.731062    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:29.742779    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:29.742789    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:29.761423    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:29.761434    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:29.776679    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:29.776689    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:29.794113    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:29.794125    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:29.832666    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:29.832676    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:29.848383    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:29.848398    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:29.860131    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:29.860141    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:29.873663    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:29.873673    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:32.387459    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:37.388012    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:37.388466    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:37.426284    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:37.426415    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:37.444211    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:37.444315    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:37.457925    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:37.458006    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:37.473217    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:37.473300    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:37.484034    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:37.484118    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:37.494837    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:37.494924    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:37.510816    9094 logs.go:276] 0 containers: []
	W0920 10:52:37.510830    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:37.510901    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:37.523437    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:37.523456    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:37.523462    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:37.538464    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:37.538478    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:37.550565    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:37.550576    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:37.586888    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:37.586896    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:37.624852    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:37.624862    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:37.636228    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:37.636240    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:37.663980    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:37.663995    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:37.678381    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:37.678392    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:37.693246    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:37.693256    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:37.705377    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:37.705388    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:37.729215    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:37.729222    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:37.740881    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:37.740891    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:37.745315    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:37.745322    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:37.780551    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:37.780562    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:37.795184    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:37.795194    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:37.809714    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:37.809724    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:40.321775    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:45.320593    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:45.321044    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:45.352527    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:45.352693    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:45.370913    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:45.371024    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:45.384711    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:45.384804    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:45.396379    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:45.396460    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:45.406917    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:45.406998    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:45.417754    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:45.417835    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:45.428471    9094 logs.go:276] 0 containers: []
	W0920 10:52:45.428482    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:45.428543    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:45.438613    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:45.438631    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:45.438637    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:45.481490    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:45.481508    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:45.499169    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:45.499185    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:45.510302    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:45.510316    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:45.527610    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:45.527626    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:45.552177    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:45.552188    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:45.556343    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:45.556350    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:45.569979    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:45.569993    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:45.581775    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:45.581788    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:45.600260    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:45.600272    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:45.612361    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:45.612371    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:45.650988    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:45.650996    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:45.669926    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:45.669937    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:45.681380    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:45.681394    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:45.694102    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:45.694115    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:45.729164    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:45.729175    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:48.253674    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:52:53.254922    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:52:53.255294    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:52:53.302387    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:52:53.302485    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:52:53.320028    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:52:53.320108    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:52:53.330941    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:52:53.331017    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:52:53.341571    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:52:53.341662    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:52:53.352533    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:52:53.352620    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:52:53.363015    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:52:53.363093    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:52:53.373140    9094 logs.go:276] 0 containers: []
	W0920 10:52:53.373151    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:52:53.373221    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:52:53.383485    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:52:53.383501    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:52:53.383507    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:52:53.417940    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:52:53.417956    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:52:53.432808    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:52:53.432822    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:52:53.447788    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:52:53.447800    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:52:53.459667    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:52:53.459677    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:52:53.471493    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:52:53.471503    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:52:53.475780    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:52:53.475788    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:52:53.490578    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:52:53.490589    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:52:53.509264    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:52:53.509273    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:52:53.532618    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:52:53.532630    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:52:53.570552    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:52:53.570564    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:52:53.581944    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:52:53.581953    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:52:53.593610    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:52:53.593621    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:52:53.632296    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:52:53.632304    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:52:53.643852    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:52:53.643862    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:52:53.658855    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:52:53.658865    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:52:56.181707    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:01.182520    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:01.182798    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:01.203670    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:53:01.203783    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:01.218179    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:53:01.218271    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:01.230676    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:53:01.230760    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:01.241921    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:53:01.242001    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:01.252444    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:53:01.252525    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:01.262933    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:53:01.263018    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:01.273471    9094 logs.go:276] 0 containers: []
	W0920 10:53:01.273483    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:01.273557    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:01.283942    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:53:01.283958    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:01.283964    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:01.322991    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:01.323004    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:01.327110    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:53:01.327116    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:53:01.342198    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:01.342208    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:01.381899    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:53:01.381913    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:53:01.400304    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:01.400314    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:01.423241    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:53:01.423248    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:01.434805    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:53:01.434817    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:53:01.447391    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:53:01.447405    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:53:01.462138    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:53:01.462152    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:53:01.480550    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:53:01.480565    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:53:01.497047    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:53:01.497059    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:53:01.517904    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:53:01.517915    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:53:01.556265    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:53:01.556278    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:53:01.570336    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:53:01.570346    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:53:01.582610    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:53:01.582623    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:53:04.098420    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:09.100329    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:09.100869    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:09.139483    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:53:09.139656    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:09.160285    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:53:09.160424    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:09.178313    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:53:09.178396    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:09.190491    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:53:09.190581    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:09.206638    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:53:09.206733    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:09.218539    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:53:09.218625    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:09.229574    9094 logs.go:276] 0 containers: []
	W0920 10:53:09.229584    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:09.229649    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:09.240380    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:53:09.240404    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:53:09.240409    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:53:09.255357    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:53:09.255368    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:53:09.267481    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:53:09.267491    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:53:09.279420    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:09.279430    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:09.301884    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:53:09.301891    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:53:09.316374    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:53:09.316386    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:53:09.328191    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:53:09.328201    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:53:09.340130    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:53:09.340140    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:53:09.357507    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:53:09.357518    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:53:09.372292    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:53:09.372302    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:09.385131    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:09.385141    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:09.422191    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:53:09.422203    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:53:09.461266    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:53:09.461287    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:53:09.475747    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:09.475756    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:09.480356    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:09.480362    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:09.516360    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:53:09.516374    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:53:12.033764    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:17.034113    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:17.034408    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:17.060279    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:53:17.060425    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:17.078790    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:53:17.078882    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:17.091771    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:53:17.091866    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:17.102730    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:53:17.102811    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:17.112845    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:53:17.112926    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:17.123615    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:53:17.123695    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:17.133844    9094 logs.go:276] 0 containers: []
	W0920 10:53:17.133859    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:17.133926    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:17.144249    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:53:17.144265    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:17.144270    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:17.180114    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:53:17.180130    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:53:17.192066    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:17.192080    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:17.196488    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:53:17.196495    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:53:17.233288    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:53:17.233301    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:53:17.251290    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:53:17.251305    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:53:17.264867    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:17.264885    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:17.304278    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:53:17.304292    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:53:17.324607    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:53:17.324619    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:53:17.338526    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:53:17.338539    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:53:17.350276    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:53:17.350289    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:53:17.364979    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:53:17.364993    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:53:17.379181    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:53:17.379195    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:53:17.391524    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:53:17.391540    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:53:17.403161    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:17.403174    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:17.425405    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:53:17.425414    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:19.939510    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:24.941676    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:24.941996    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:53:24.966123    9094 logs.go:276] 2 containers: [9e153969f1f5 e7cf43c8a211]
	I0920 10:53:24.966271    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:53:24.981929    9094 logs.go:276] 2 containers: [97e934094d19 07e2780d69fa]
	I0920 10:53:24.982021    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:53:24.995030    9094 logs.go:276] 1 containers: [a59f98fbd24a]
	I0920 10:53:24.995105    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:53:25.009905    9094 logs.go:276] 2 containers: [0184d4d42752 cd1e5a8150d3]
	I0920 10:53:25.009985    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:53:25.020577    9094 logs.go:276] 1 containers: [a29a2a58ab03]
	I0920 10:53:25.020654    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:53:25.033371    9094 logs.go:276] 2 containers: [9b57117e8fc7 0f1f6ae5b381]
	I0920 10:53:25.033458    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:53:25.043270    9094 logs.go:276] 0 containers: []
	W0920 10:53:25.043286    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:53:25.043356    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:53:25.053502    9094 logs.go:276] 1 containers: [ce9f228e1fb9]
	I0920 10:53:25.053521    9094 logs.go:123] Gathering logs for etcd [97e934094d19] ...
	I0920 10:53:25.053525    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97e934094d19"
	I0920 10:53:25.067628    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:53:25.067638    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:53:25.080861    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:53:25.080874    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:53:25.115278    9094 logs.go:123] Gathering logs for kube-apiserver [9e153969f1f5] ...
	I0920 10:53:25.115287    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e153969f1f5"
	I0920 10:53:25.129181    9094 logs.go:123] Gathering logs for kube-apiserver [e7cf43c8a211] ...
	I0920 10:53:25.129190    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7cf43c8a211"
	I0920 10:53:25.166994    9094 logs.go:123] Gathering logs for storage-provisioner [ce9f228e1fb9] ...
	I0920 10:53:25.167005    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce9f228e1fb9"
	I0920 10:53:25.179833    9094 logs.go:123] Gathering logs for etcd [07e2780d69fa] ...
	I0920 10:53:25.179844    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07e2780d69fa"
	I0920 10:53:25.194398    9094 logs.go:123] Gathering logs for coredns [a59f98fbd24a] ...
	I0920 10:53:25.194409    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a59f98fbd24a"
	I0920 10:53:25.205866    9094 logs.go:123] Gathering logs for kube-scheduler [cd1e5a8150d3] ...
	I0920 10:53:25.205878    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1e5a8150d3"
	I0920 10:53:25.220645    9094 logs.go:123] Gathering logs for kube-controller-manager [9b57117e8fc7] ...
	I0920 10:53:25.220658    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b57117e8fc7"
	I0920 10:53:25.238002    9094 logs.go:123] Gathering logs for kube-controller-manager [0f1f6ae5b381] ...
	I0920 10:53:25.238015    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f1f6ae5b381"
	I0920 10:53:25.252069    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:53:25.252082    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:53:25.275559    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:53:25.275567    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:53:25.315311    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:53:25.315325    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:53:25.320135    9094 logs.go:123] Gathering logs for kube-proxy [a29a2a58ab03] ...
	I0920 10:53:25.320142    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29a2a58ab03"
	I0920 10:53:25.331417    9094 logs.go:123] Gathering logs for kube-scheduler [0184d4d42752] ...
	I0920 10:53:25.331427    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0184d4d42752"
	I0920 10:53:27.848751    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:32.851313    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:32.851386    9094 kubeadm.go:597] duration metric: took 4m3.805727625s to restartPrimaryControlPlane
	W0920 10:53:32.851453    9094 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
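[Editor's sketch, not minikube source: the loop above repeats one pattern for roughly four minutes — poll the apiserver /healthz endpoint with a ~5s timeout and, on each failure, enumerate the control-plane containers and dump their recent logs. A minimal bash equivalent of that pattern, using the endpoint and container names from the log above; curl stands in for minikube's Go HTTP client and the retry count is illustrative:]

    #!/bin/bash
    # Poll apiserver health; on failure, dump recent logs from control-plane containers.
    APISERVER="https://10.0.2.15:8443"   # endpoint taken from the log above
    for attempt in $(seq 1 10); do
      # /healthz returns the literal body "ok" when the apiserver is healthy
      if curl -sk --max-time 5 "$APISERVER/healthz" | grep -q '^ok$'; then
        echo "apiserver healthy"; exit 0
      fi
      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
        # Same filter/format minikube runs above to find container IDs by k8s_ name prefix
        for id in $(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}'); do
          docker logs --tail 400 "$id"
        done
      done
    done
    exit 1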
	I0920 10:53:32.851479    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0920 10:53:33.899697    9094 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.048223041s)
	I0920 10:53:33.899772    9094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 10:53:33.904872    9094 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 10:53:33.907950    9094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 10:53:33.910786    9094 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 10:53:33.910793    9094 kubeadm.go:157] found existing configuration files:
	
	I0920 10:53:33.910822    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/admin.conf
	I0920 10:53:33.913297    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 10:53:33.913328    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 10:53:33.915852    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/kubelet.conf
	I0920 10:53:33.918880    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 10:53:33.918901    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 10:53:33.921490    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/controller-manager.conf
	I0920 10:53:33.924037    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 10:53:33.924061    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 10:53:33.927182    9094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/scheduler.conf
	I0920 10:53:33.930042    9094 kubeadm.go:163] "https://control-plane.minikube.internal:51545" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51545 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 10:53:33.930070    9094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 10:53:33.932557    9094 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 10:53:33.950603    9094 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0920 10:53:33.950699    9094 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 10:53:33.999288    9094 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 10:53:33.999343    9094 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 10:53:33.999401    9094 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 10:53:34.052546    9094 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 10:53:34.060674    9094 out.go:235]   - Generating certificates and keys ...
	I0920 10:53:34.060710    9094 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 10:53:34.060745    9094 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 10:53:34.060781    9094 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 10:53:34.060848    9094 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 10:53:34.060884    9094 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 10:53:34.060914    9094 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 10:53:34.060950    9094 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 10:53:34.060986    9094 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 10:53:34.061032    9094 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 10:53:34.061075    9094 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 10:53:34.061097    9094 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 10:53:34.061126    9094 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 10:53:34.086765    9094 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 10:53:34.174692    9094 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 10:53:34.244390    9094 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 10:53:34.427178    9094 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 10:53:34.458171    9094 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 10:53:34.458545    9094 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 10:53:34.458576    9094 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 10:53:34.545176    9094 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 10:53:34.549380    9094 out.go:235]   - Booting up control plane ...
	I0920 10:53:34.549426    9094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 10:53:34.549472    9094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 10:53:34.549532    9094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 10:53:34.549576    9094 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 10:53:34.549688    9094 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 10:53:39.051053    9094 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502493 seconds
	I0920 10:53:39.051127    9094 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 10:53:39.054958    9094 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 10:53:39.566393    9094 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 10:53:39.566808    9094 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-770000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 10:53:40.070038    9094 kubeadm.go:310] [bootstrap-token] Using token: oamarz.9okfcddbvqluxbug
	I0920 10:53:40.076675    9094 out.go:235]   - Configuring RBAC rules ...
	I0920 10:53:40.076747    9094 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 10:53:40.076803    9094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 10:53:40.078791    9094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 10:53:40.083068    9094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 10:53:40.084011    9094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 10:53:40.084784    9094 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 10:53:40.088351    9094 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 10:53:40.235955    9094 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 10:53:40.475966    9094 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 10:53:40.476472    9094 kubeadm.go:310] 
	I0920 10:53:40.476514    9094 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 10:53:40.476559    9094 kubeadm.go:310] 
	I0920 10:53:40.476702    9094 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 10:53:40.476733    9094 kubeadm.go:310] 
	I0920 10:53:40.476749    9094 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 10:53:40.476781    9094 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 10:53:40.476813    9094 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 10:53:40.476818    9094 kubeadm.go:310] 
	I0920 10:53:40.476849    9094 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 10:53:40.476852    9094 kubeadm.go:310] 
	I0920 10:53:40.476874    9094 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 10:53:40.476879    9094 kubeadm.go:310] 
	I0920 10:53:40.476905    9094 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 10:53:40.476954    9094 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 10:53:40.476999    9094 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 10:53:40.477004    9094 kubeadm.go:310] 
	I0920 10:53:40.477052    9094 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 10:53:40.477093    9094 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 10:53:40.477097    9094 kubeadm.go:310] 
	I0920 10:53:40.477145    9094 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oamarz.9okfcddbvqluxbug \
	I0920 10:53:40.477203    9094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:060a9df3d803721427aee4d9db182572971f8fddfdaccc18183246a007d5e636 \
	I0920 10:53:40.477214    9094 kubeadm.go:310] 	--control-plane 
	I0920 10:53:40.477219    9094 kubeadm.go:310] 
	I0920 10:53:40.477265    9094 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 10:53:40.477270    9094 kubeadm.go:310] 
	I0920 10:53:40.477312    9094 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oamarz.9okfcddbvqluxbug \
	I0920 10:53:40.477371    9094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:060a9df3d803721427aee4d9db182572971f8fddfdaccc18183246a007d5e636 
	I0920 10:53:40.477525    9094 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 10:53:40.477534    9094 cni.go:84] Creating CNI manager for ""
	I0920 10:53:40.477542    9094 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:53:40.481106    9094 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 10:53:40.489128    9094 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 10:53:40.492068    9094 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
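	(For context: the 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI template. The log does not reproduce the template itself; the following is a minimal sketch of what a bridge conflist of this kind typically contains — the subnet and plugin options are illustrative assumptions, not values taken from this run.)

	package main

	// Illustrative sketch only: minikube's real 1-k8s.conflist template is not
	// shown in this log. The subnet and plugin options below are assumptions.
	import (
		"encoding/json"
		"fmt"
	)

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		// Sanity-check that the sketch is well-formed JSON before it would be
		// copied onto the node, the way the scp step above does.
		var v map[string]any
		if err := json.Unmarshal([]byte(bridgeConflist), &v); err != nil {
			panic(err)
		}
		fmt.Println("conflist parses; plugins:", len(v["plugins"].([]any)))
	}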
	I0920 10:53:40.496725    9094 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 10:53:40.496810    9094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 10:53:40.496812    9094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-770000 minikube.k8s.io/updated_at=2024_09_20T10_53_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=stopped-upgrade-770000 minikube.k8s.io/primary=true
	I0920 10:53:40.537852    9094 ops.go:34] apiserver oom_adj: -16
	I0920 10:53:40.537866    9094 kubeadm.go:1113] duration metric: took 41.083333ms to wait for elevateKubeSystemPrivileges
	I0920 10:53:40.537872    9094 kubeadm.go:394] duration metric: took 4m11.506875208s to StartCluster
	I0920 10:53:40.537881    9094 settings.go:142] acquiring lock: {Name:mk90c7bb0a96d07865bd05b5bab2437d4acfe4d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:53:40.537974    9094 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:53:40.538416    9094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/kubeconfig: {Name:mkc202c0538e947b3e0d9844748996d0c112bf36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:53:40.538631    9094 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:53:40.538639    9094 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 10:53:40.538700    9094 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-770000"
	I0920 10:53:40.538707    9094 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-770000"
	W0920 10:53:40.538712    9094 addons.go:243] addon storage-provisioner should already be in state true
	I0920 10:53:40.538727    9094 host.go:66] Checking if "stopped-upgrade-770000" exists ...
	I0920 10:53:40.538727    9094 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-770000"
	I0920 10:53:40.538728    9094 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:53:40.538735    9094 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-770000"
	I0920 10:53:40.543086    9094 out.go:177] * Verifying Kubernetes components...
	I0920 10:53:40.543784    9094 kapi.go:59] client config for stopped-upgrade-770000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/stopped-upgrade-770000/client.key", CAFile:"/Users/jenkins/minikube-integration/19679-6783/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1026aa030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 10:53:40.546451    9094 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-770000"
	W0920 10:53:40.546456    9094 addons.go:243] addon default-storageclass should already be in state true
	I0920 10:53:40.546465    9094 host.go:66] Checking if "stopped-upgrade-770000" exists ...
	I0920 10:53:40.547043    9094 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 10:53:40.547050    9094 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 10:53:40.547055    9094 sshutil.go:53] new ssh client: &{IP:localhost Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/id_rsa Username:docker}
	I0920 10:53:40.552027    9094 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 10:53:40.556114    9094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 10:53:40.559095    9094 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:53:40.559102    9094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 10:53:40.559110    9094 sshutil.go:53] new ssh client: &{IP:localhost Port:51511 SSHKeyPath:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/stopped-upgrade-770000/id_rsa Username:docker}
	I0920 10:53:40.632899    9094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 10:53:40.637950    9094 api_server.go:52] waiting for apiserver process to appear ...
	I0920 10:53:40.637998    9094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 10:53:40.641731    9094 api_server.go:72] duration metric: took 103.087875ms to wait for apiserver process to appear ...
	I0920 10:53:40.641738    9094 api_server.go:88] waiting for apiserver healthz status ...
	I0920 10:53:40.641745    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:40.665281    9094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 10:53:40.682606    9094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 10:53:41.035835    9094 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 10:53:41.035848    9094 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 10:53:45.643813    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:45.643868    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:50.644237    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:50.644279    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:53:55.644659    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:53:55.644683    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:00.645145    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:00.645207    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:05.645860    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:05.645905    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:10.646722    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:10.646792    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0920 10:54:11.037999    9094 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0920 10:54:11.042327    9094 out.go:177] * Enabled addons: storage-provisioner
	I0920 10:54:11.054264    9094 addons.go:510] duration metric: took 30.515940583s for enable addons: enabled=[storage-provisioner]
	I0920 10:54:15.647905    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:15.647947    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:20.649451    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:20.649481    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:25.649825    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:25.649876    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:30.651906    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:30.651944    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:35.654233    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:35.654284    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:40.654937    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
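	(The probe pattern above repeats every ~5 seconds: each GET to /healthz is given a short client timeout, and on "context deadline exceeded" the checker simply tries again; after repeated failures it falls back to the diagnostic pass that follows. Below is a minimal sketch of such a poller, assuming a plain net/http client with a 5s timeout — the real api_server.go checker plumbs in the cluster CA and its own retry helpers, so names and TLS handling here are illustrative only.)

	package main

	// Minimal sketch of an apiserver healthz wait loop. Assumes a plain HTTP
	// client; the real minikube checker loads ca.crt instead of skipping
	// verification and uses richer backoff.
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, deadline time.Time) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gaps between probes above
			Transport: &http.Transport{
				// Illustrative shortcut: skip verification rather than load ca.crt.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
			}
			// On timeout or a non-200 answer, report and retry, as the loop above does.
			fmt.Printf("healthz not ready: %v\n", err)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		_ = waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(6*time.Minute))
	}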
	I0920 10:54:40.655418    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:40.687505    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:54:40.687657    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:40.725102    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:54:40.725206    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:40.745777    9094 logs.go:276] 2 containers: [cbfdabeebe6c 10e4a5674017]
	I0920 10:54:40.745871    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:40.762050    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:54:40.762134    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:40.773946    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:54:40.774019    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:40.784379    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:54:40.784457    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:40.794803    9094 logs.go:276] 0 containers: []
	W0920 10:54:40.794827    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:40.794885    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:40.805286    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:54:40.805299    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:54:40.805305    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:40.816760    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:40.816774    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:40.852251    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:54:40.852264    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:54:40.872507    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:54:40.872519    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:54:40.884837    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:54:40.884848    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:54:40.896169    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:54:40.896178    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:54:40.914371    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:54:40.914381    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:54:40.931492    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:40.931503    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:40.955083    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:40.955089    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:40.990065    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:40.990073    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:40.995575    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:54:40.995582    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:54:41.010496    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:54:41.010504    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:54:41.022160    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:54:41.022170    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
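	(Each diagnostic pass like the one above enumerates the control-plane containers by docker name filter and tails the last 400 lines of each. A minimal sketch of that enumeration follows, assuming local access to the docker CLI — in the real run every command goes through minikube's SSH runner, and the helper names here are hypothetical.)

	package main

	// Sketch of the diagnostic pass above: find containers by name filter,
	// then tail their logs. Assumes a local docker CLI rather than the SSH
	// runner minikube actually uses.
	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+name, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{
			"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
			"k8s_kube-scheduler", "k8s_kube-proxy",
			"k8s_kube-controller-manager", "k8s_storage-provisioner",
		} {
			ids, err := containerIDs(name)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container matching %q\n", name)
				continue
			}
			for _, id := range ids {
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
			}
		}
	}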
	I0920 10:54:43.542193    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:48.545033    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:48.545570    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:48.586237    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:54:48.586387    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:48.608649    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:54:48.608772    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:48.625519    9094 logs.go:276] 2 containers: [cbfdabeebe6c 10e4a5674017]
	I0920 10:54:48.625616    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:48.638256    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:54:48.638336    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:48.649251    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:54:48.649343    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:48.660881    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:54:48.660959    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:48.672340    9094 logs.go:276] 0 containers: []
	W0920 10:54:48.672351    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:48.672414    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:48.683361    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:54:48.683374    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:54:48.683379    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:54:48.695500    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:54:48.695510    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:54:48.707555    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:54:48.707565    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:54:48.722421    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:54:48.722431    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:54:48.734714    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:48.734727    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:48.772227    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:48.772236    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:48.776426    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:54:48.776435    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:54:48.797412    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:54:48.797422    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:54:48.813662    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:54:48.813673    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:54:48.832791    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:48.832801    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:48.871679    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:54:48.871688    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:54:48.884495    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:48.884504    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:48.908715    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:54:48.908721    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:51.423289    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:54:56.426210    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:54:56.426798    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:54:56.465834    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:54:56.466001    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:54:56.488912    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:54:56.489052    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:54:56.504535    9094 logs.go:276] 2 containers: [cbfdabeebe6c 10e4a5674017]
	I0920 10:54:56.504634    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:54:56.517104    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:54:56.517189    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:54:56.528202    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:54:56.528282    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:54:56.539440    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:54:56.539523    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:54:56.550966    9094 logs.go:276] 0 containers: []
	W0920 10:54:56.550979    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:54:56.551051    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:54:56.569088    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:54:56.569106    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:54:56.569112    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:54:56.583905    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:54:56.583919    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:54:56.599023    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:54:56.599034    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:54:56.611445    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:54:56.611453    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:54:56.629594    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:54:56.629604    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:54:56.655913    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:54:56.655924    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:54:56.667773    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:54:56.667786    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:54:56.682322    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:54:56.682331    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:54:56.719468    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:54:56.719477    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:54:56.724432    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:54:56.724440    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:54:56.763541    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:54:56.763551    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:54:56.775812    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:54:56.775821    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:54:56.790172    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:54:56.790182    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:54:59.303992    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:04.306683    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:04.307198    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:04.353069    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:55:04.353237    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:04.373969    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:55:04.374072    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:04.389059    9094 logs.go:276] 2 containers: [cbfdabeebe6c 10e4a5674017]
	I0920 10:55:04.389142    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:04.401420    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:55:04.401498    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:04.412593    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:55:04.412679    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:04.423458    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:55:04.423547    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:04.434410    9094 logs.go:276] 0 containers: []
	W0920 10:55:04.434422    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:04.434493    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:04.445980    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:55:04.445995    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:04.446002    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:04.486888    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:55:04.486903    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:55:04.502098    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:55:04.502111    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:55:04.516806    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:04.516817    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:04.541665    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:55:04.541672    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:04.562799    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:55:04.562810    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:55:04.582957    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:55:04.582968    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:55:04.595347    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:04.595357    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:04.633917    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:04.633924    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:04.638115    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:55:04.638121    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:55:04.651149    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:55:04.651158    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:55:04.664312    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:55:04.664328    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:55:04.679290    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:55:04.679303    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:55:07.194116    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:12.196502    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:12.197118    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:12.234669    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:55:12.234826    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:12.255348    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:55:12.255463    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:12.271037    9094 logs.go:276] 2 containers: [cbfdabeebe6c 10e4a5674017]
	I0920 10:55:12.271124    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:12.283603    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:55:12.283685    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:12.294855    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:55:12.294938    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:12.306926    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:55:12.306998    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:12.328365    9094 logs.go:276] 0 containers: []
	W0920 10:55:12.328379    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:12.328450    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:12.342904    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:55:12.342920    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:12.342927    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:12.347455    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:55:12.347465    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:55:12.365385    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:55:12.365395    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:55:12.377436    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:55:12.377447    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:55:12.395626    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:55:12.395637    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:55:12.407805    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:12.407816    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:12.432513    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:12.432521    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:12.468906    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:55:12.468913    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:55:12.483648    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:55:12.483657    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:55:12.497849    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:55:12.497859    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:55:12.512884    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:55:12.512894    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:55:12.525537    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:55:12.525547    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:12.537851    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:12.537868    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:15.099913    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:20.102619    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:20.103077    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:20.147080    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:55:20.147221    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:20.168008    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:55:20.168113    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:20.183068    9094 logs.go:276] 2 containers: [cbfdabeebe6c 10e4a5674017]
	I0920 10:55:20.183148    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:20.196253    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:55:20.196323    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:20.208140    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:55:20.208205    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:20.219681    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:55:20.219747    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:20.234179    9094 logs.go:276] 0 containers: []
	W0920 10:55:20.234191    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:20.234265    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:20.245681    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:55:20.245701    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:55:20.245707    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:55:20.261241    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:55:20.261252    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:55:20.276392    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:55:20.276402    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:55:20.288593    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:20.288603    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:20.324962    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:55:20.324979    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:55:20.340527    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:55:20.340537    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:55:20.355604    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:55:20.355613    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:55:20.374404    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:55:20.374414    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:55:20.387270    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:20.387280    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:20.412081    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:55:20.412089    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:20.424061    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:20.424072    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:20.461028    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:20.461036    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:20.465050    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:55:20.465058    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:55:22.978130    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:27.980900    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:27.981483    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:28.023071    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:55:28.023222    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:28.046247    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:55:28.046374    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:28.062028    9094 logs.go:276] 2 containers: [cbfdabeebe6c 10e4a5674017]
	I0920 10:55:28.062130    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:28.075013    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:55:28.075085    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:28.086421    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:55:28.086508    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:28.098896    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:55:28.098968    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:28.110148    9094 logs.go:276] 0 containers: []
	W0920 10:55:28.110163    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:28.110233    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:28.122247    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:55:28.122262    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:55:28.122267    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:55:28.134977    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:55:28.134990    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:55:28.153285    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:55:28.153295    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:28.165422    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:28.165436    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:28.200512    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:55:28.200519    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:55:28.224716    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:55:28.224727    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:55:28.239994    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:55:28.240004    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:55:28.252157    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:55:28.252167    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:55:28.265228    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:55:28.265245    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:55:28.280713    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:55:28.280723    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:55:28.293181    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:28.293192    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:28.317607    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:28.317614    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:28.321862    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:28.321869    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:30.861636    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:35.864391    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:35.864877    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:35.904606    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:55:35.904767    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:35.926917    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:55:35.927064    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:35.944148    9094 logs.go:276] 2 containers: [cbfdabeebe6c 10e4a5674017]
	I0920 10:55:35.944238    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:35.956494    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:55:35.956577    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:35.972124    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:55:35.972194    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:35.982749    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:55:35.982833    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:35.993339    9094 logs.go:276] 0 containers: []
	W0920 10:55:35.993351    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:35.993425    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:36.003791    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:55:36.003810    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:36.003816    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:36.008106    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:36.008115    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:36.041623    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:55:36.041637    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:55:36.056252    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:55:36.056266    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:55:36.073848    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:55:36.073861    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:55:36.085359    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:36.085371    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:36.109388    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:55:36.109397    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:36.121933    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:36.121947    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:36.157274    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:55:36.157284    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:55:36.170966    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:55:36.170976    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:55:36.182868    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:55:36.182879    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:55:36.194731    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:55:36.194742    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:55:36.209668    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:55:36.209678    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:55:38.723514    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:43.726180    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:43.726573    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:43.754468    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:55:43.754596    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:43.774819    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:55:43.774926    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:43.788354    9094 logs.go:276] 2 containers: [cbfdabeebe6c 10e4a5674017]
	I0920 10:55:43.788444    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:43.799535    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:55:43.799621    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:43.810241    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:55:43.810320    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:43.820264    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:55:43.820339    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:43.830605    9094 logs.go:276] 0 containers: []
	W0920 10:55:43.830617    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:43.830686    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:43.841133    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:55:43.841149    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:55:43.841154    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:55:43.855190    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:55:43.855201    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:55:43.866808    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:55:43.866820    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:55:43.883678    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:55:43.883687    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:55:43.898030    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:55:43.898041    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:55:43.914655    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:43.914664    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:43.939126    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:43.939136    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:43.973339    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:43.973350    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:43.977766    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:55:43.977773    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:55:43.991540    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:55:43.991549    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:55:44.003140    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:55:44.003157    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:55:44.015069    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:55:44.015079    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:44.029550    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:44.029561    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:46.571591    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:51.574260    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:51.574579    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:51.601860    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:55:51.601991    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:51.619376    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:55:51.619474    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:51.632851    9094 logs.go:276] 2 containers: [cbfdabeebe6c 10e4a5674017]
	I0920 10:55:51.632924    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:51.643929    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:55:51.644011    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:51.655588    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:55:51.655676    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:51.681656    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:55:51.681744    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:51.706333    9094 logs.go:276] 0 containers: []
	W0920 10:55:51.706346    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:51.706413    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:51.741758    9094 logs.go:276] 1 containers: [5f4dfc00b692]
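
[Note] Each retry begins with the container-discovery pass shown above: one docker ps -a per control-plane component, filtered on the kubelet's k8s_<component> container-name prefix and formatted down to bare IDs, producing the "N containers: [...]" lines from logs.go:276. A rough sketch of that loop (the helper name containerIDs is hypothetical; the command line is copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all container IDs (running or exited) whose name
    // carries the kubelet's k8s_<component> prefix.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        } {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // mirrors the logs.go:276 output, e.g. "coredns: 2 containers: [...]"
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
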
	I0920 10:55:51.741776    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:51.741783    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:51.821011    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:55:51.821027    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:55:51.839753    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:55:51.839768    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:55:51.854549    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:55:51.854559    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:55:51.866926    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:51.866937    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:51.893244    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:51.893254    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:51.897384    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:55:51.897390    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:55:51.912190    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:55:51.912201    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:55:51.923804    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:55:51.923813    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:55:51.935703    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:55:51.935714    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:55:51.953860    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:55:51.953870    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:55:51.970326    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:55:51.970337    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:51.982223    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:51.982235    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
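
[Note] Discovery is followed by the gathering pass that fills the rest of each cycle: the last 400 log lines of every discovered container via docker logs, plus journalctl, dmesg, and kubectl describe nodes for cluster-level state. A compressed sketch of that fan-out — the command strings are copied from the log, but running them locally through bash -c (rather than over SSH into the guest, as ssh_runner.go does) is an assumption made to keep the example self-contained:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one collection command the way the log shows them invoked:
    // /bin/bash -c "<cmd>".
    func gather(name, cmd string) {
        fmt.Printf("Gathering logs for %s ...\n", name)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("%s failed: %v\n", name, err)
        }
        _ = out // a real collector would buffer or stream this output
    }

    func main() {
        // container ID taken from the discovery pass above
        gather("etcd [60ae1745c459]", "docker logs --tail 400 60ae1745c459")
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("describe nodes",
            "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
    }
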
	I0920 10:55:54.521357    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:55:59.523911    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:55:59.524032    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:55:59.536547    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:55:59.536638    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:55:59.549372    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:55:59.549465    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:55:59.562420    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:55:59.562514    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:55:59.580698    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:55:59.580771    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:55:59.591648    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:55:59.591726    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:55:59.610472    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:55:59.610546    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:55:59.621883    9094 logs.go:276] 0 containers: []
	W0920 10:55:59.621897    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:55:59.621963    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:55:59.632114    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:55:59.632130    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:55:59.632135    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:55:59.668783    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:55:59.668796    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:55:59.683131    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:55:59.683145    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:55:59.694667    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:55:59.694681    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:55:59.708493    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:55:59.708506    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:55:59.719770    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:55:59.719780    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:55:59.735122    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:55:59.735133    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:55:59.770975    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:55:59.770985    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:55:59.782557    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:55:59.782569    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:55:59.794295    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:55:59.794306    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:55:59.819323    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:55:59.819333    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:55:59.831378    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:55:59.831389    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:55:59.835642    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:55:59.835648    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:55:59.850517    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:55:59.850527    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:55:59.868106    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:55:59.868120    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:56:02.381857    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:07.384627    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:07.385140    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:07.426092    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:56:07.426242    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:07.447655    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:56:07.447793    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:07.464241    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:56:07.464333    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:07.478764    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:56:07.478839    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:07.489402    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:56:07.489473    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:07.500189    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:56:07.500261    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:07.509993    9094 logs.go:276] 0 containers: []
	W0920 10:56:07.510005    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:07.510068    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:07.520303    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:56:07.520320    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:07.520326    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:07.557107    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:56:07.557115    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:07.573486    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:56:07.573500    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:56:07.589192    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:56:07.589202    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:56:07.601229    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:56:07.601240    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:56:07.615986    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:56:07.615997    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:56:07.627427    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:56:07.627438    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:56:07.639304    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:56:07.639315    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:56:07.655061    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:56:07.655074    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:56:07.674430    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:07.674444    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:07.698921    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:07.698931    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:07.703700    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:07.703709    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:07.749893    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:56:07.749908    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:56:07.764824    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:56:07.764838    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:56:07.776209    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:56:07.776224    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:56:10.289978    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:15.292212    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:15.292823    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:15.335430    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:56:15.335594    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:15.358291    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:56:15.358429    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:15.375766    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:56:15.375860    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:15.387655    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:56:15.387736    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:15.399159    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:56:15.399241    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:15.409954    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:56:15.410032    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:15.424497    9094 logs.go:276] 0 containers: []
	W0920 10:56:15.424511    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:15.424575    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:15.435318    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:56:15.435338    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:56:15.435342    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:56:15.447099    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:56:15.447109    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:56:15.462590    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:56:15.462601    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:56:15.475085    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:56:15.475095    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:56:15.487073    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:56:15.487082    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:56:15.504937    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:56:15.504949    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:56:15.518433    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:56:15.518444    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:56:15.530515    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:15.530526    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:15.535350    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:15.535359    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:15.569202    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:56:15.569213    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:56:15.584180    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:56:15.584188    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:56:15.595238    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:56:15.595249    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:56:15.607200    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:15.607213    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:15.633139    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:56:15.633145    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:15.644959    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:15.644969    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:18.182976    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:23.184452    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:23.184520    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:23.196635    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:56:23.196704    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:23.217237    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:56:23.217291    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:23.228105    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:56:23.228174    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:23.242553    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:56:23.242619    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:23.254893    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:56:23.254957    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:23.265882    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:56:23.265953    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:23.280671    9094 logs.go:276] 0 containers: []
	W0920 10:56:23.280682    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:23.280734    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:23.292116    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:56:23.292132    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:23.292138    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:23.317620    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:56:23.317632    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:23.330568    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:56:23.330578    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:56:23.348078    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:56:23.348087    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:56:23.362607    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:56:23.362618    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:56:23.375486    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:56:23.375497    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:56:23.389175    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:56:23.389187    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:56:23.404346    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:23.404356    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:23.442146    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:56:23.442155    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:56:23.454791    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:56:23.454804    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:56:23.474252    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:23.474267    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:23.478639    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:56:23.478650    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:56:23.497114    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:23.497127    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:23.533782    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:56:23.533792    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:56:23.546779    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:56:23.546790    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:56:26.069638    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:31.072387    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:31.073028    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:31.113541    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:56:31.113697    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:31.137162    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:56:31.137297    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:31.153383    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:56:31.153478    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:31.165460    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:56:31.165532    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:31.176085    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:56:31.176165    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:31.186999    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:56:31.187072    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:31.198476    9094 logs.go:276] 0 containers: []
	W0920 10:56:31.198488    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:31.198560    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:31.210916    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:56:31.210936    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:31.210942    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:31.249979    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:56:31.249994    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:56:31.269216    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:31.269233    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:31.308895    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:56:31.308924    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:56:31.323211    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:56:31.323225    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:56:31.342831    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:31.342852    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:31.368247    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:56:31.368260    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:56:31.380538    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:56:31.380552    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:56:31.392275    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:56:31.392290    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:56:31.403613    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:56:31.403626    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:31.419642    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:31.419651    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:31.424245    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:56:31.424254    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:56:31.444178    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:56:31.444187    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:56:31.457900    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:56:31.457914    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:56:31.469113    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:56:31.469123    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:56:33.986149    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:38.988421    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:38.988847    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:39.020125    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:56:39.020270    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:39.039352    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:56:39.039448    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:39.053610    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:56:39.053705    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:39.065380    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:56:39.065466    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:39.076290    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:56:39.076364    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:39.087253    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:56:39.087336    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:39.101262    9094 logs.go:276] 0 containers: []
	W0920 10:56:39.101278    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:39.101353    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:39.112319    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:56:39.112338    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:39.112344    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:39.116759    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:56:39.116767    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:56:39.128966    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:56:39.128977    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:56:39.143744    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:56:39.143757    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:56:39.157086    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:56:39.157096    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:56:39.168863    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:39.168873    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:39.193260    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:56:39.193266    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:56:39.207684    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:56:39.207700    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:56:39.222197    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:56:39.222207    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:39.235445    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:39.235457    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:39.270122    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:56:39.270132    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:56:39.281803    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:56:39.281816    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:56:39.300579    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:39.300589    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:39.336019    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:56:39.336028    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:56:39.347573    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:56:39.347583    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:56:41.861382    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:46.862816    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:46.862912    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:46.881347    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:56:46.881408    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:46.893134    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:56:46.893203    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:46.904769    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:56:46.904844    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:46.916583    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:56:46.916654    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:46.931135    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:56:46.931225    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:46.943611    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:56:46.943670    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:46.953974    9094 logs.go:276] 0 containers: []
	W0920 10:56:46.953986    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:46.954049    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:46.966837    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:56:46.966854    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:56:46.966860    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:56:46.980204    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:56:46.980217    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:56:46.994299    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:46.994307    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:47.031406    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:56:47.031421    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:56:47.043917    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:56:47.043928    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:56:47.056525    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:56:47.056541    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:56:47.074725    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:47.074737    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:47.114226    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:56:47.114244    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:56:47.130752    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:56:47.130761    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:56:47.145954    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:56:47.145970    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:47.158675    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:47.158686    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:47.164602    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:56:47.164615    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:56:47.186477    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:56:47.186489    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:56:47.198997    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:56:47.199009    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:56:47.212699    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:47.212714    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:49.739907    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:56:54.742465    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:56:54.743090    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:56:54.787188    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:56:54.787355    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:56:54.808860    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:56:54.809002    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:56:54.823728    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:56:54.823828    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:56:54.836179    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:56:54.836266    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:56:54.847006    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:56:54.847092    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:56:54.858046    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:56:54.858131    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:56:54.868476    9094 logs.go:276] 0 containers: []
	W0920 10:56:54.868490    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:56:54.868552    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:56:54.878989    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:56:54.879007    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:56:54.879013    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:56:54.890926    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:56:54.890937    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:56:54.908352    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:56:54.908362    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:56:54.945285    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:56:54.945297    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:56:54.957268    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:56:54.957278    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:56:54.971718    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:56:54.971728    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:56:54.983300    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:56:54.983314    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:56:55.008061    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:56:55.008067    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:56:55.025061    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:56:55.025072    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:56:55.036367    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:56:55.036377    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:56:55.048238    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:56:55.048249    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:56:55.052571    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:56:55.052576    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:56:55.102218    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:56:55.102229    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:56:55.120397    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:56:55.120408    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:56:55.135707    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:56:55.135717    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:56:57.649039    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:02.650603    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:02.651152    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:02.688785    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:57:02.688937    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:02.710138    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:57:02.710266    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:02.725708    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:57:02.725799    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:02.738213    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:57:02.738285    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:02.750005    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:57:02.750087    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:02.761817    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:57:02.761901    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:02.771943    9094 logs.go:276] 0 containers: []
	W0920 10:57:02.771953    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:02.772019    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:02.783040    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:57:02.783055    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:57:02.783060    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:57:02.797477    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:57:02.797486    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:57:02.809390    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:57:02.809402    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:57:02.821323    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:57:02.821332    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:57:02.833422    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:57:02.833431    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:57:02.844775    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:57:02.844786    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:57:02.856081    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:02.856092    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:02.879215    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:02.879225    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:02.883519    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:57:02.883524    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:57:02.898208    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:57:02.898219    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:57:02.916226    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:02.916238    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:02.953248    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:02.953258    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:02.989869    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:57:02.989879    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:57:03.011877    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:57:03.011887    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:57:03.023225    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:57:03.023235    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:05.537097    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:10.539540    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:10.539618    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:10.551898    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:57:10.551967    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:10.563544    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:57:10.563605    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:10.578675    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:57:10.578742    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:10.590265    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:57:10.590344    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:10.602545    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:57:10.602603    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:10.612817    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:57:10.612892    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:10.623944    9094 logs.go:276] 0 containers: []
	W0920 10:57:10.623954    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:10.624016    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:10.637769    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:57:10.637785    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:10.637790    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:10.675045    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:57:10.675061    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:57:10.688496    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:57:10.688505    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:57:10.701834    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:57:10.701851    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:57:10.717210    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:57:10.717222    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:57:10.732334    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:10.732348    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:10.758016    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:57:10.758034    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:57:10.773860    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:57:10.773876    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:57:10.788827    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:57:10.788837    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:57:10.807251    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:57:10.807263    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:10.819706    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:10.819718    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:10.824761    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:10.824772    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:10.861600    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:57:10.861609    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:57:10.874256    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:57:10.874268    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:57:10.888687    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:57:10.888702    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:57:13.409513    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:18.412268    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:18.412506    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:18.430909    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:57:18.431006    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:18.444703    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:57:18.444791    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:18.456393    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:57:18.456483    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:18.467405    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:57:18.467489    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:18.477746    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:57:18.477826    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:18.488056    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:57:18.488138    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:18.499256    9094 logs.go:276] 0 containers: []
	W0920 10:57:18.499268    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:18.499340    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:18.509410    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:57:18.509429    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:57:18.509434    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:57:18.521530    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:57:18.521541    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:57:18.536399    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:57:18.536410    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:57:18.547863    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:57:18.547874    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:57:18.562951    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:57:18.562961    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:57:18.576605    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:57:18.576620    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:57:18.587793    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:57:18.587806    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:57:18.599721    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:18.599734    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:18.637271    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:18.637279    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:18.671475    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:57:18.671485    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:18.684344    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:18.684357    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:18.707641    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:18.707648    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:18.711813    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:57:18.711818    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:57:18.724966    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:57:18.724977    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:57:18.736380    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:57:18.736389    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:57:21.255698    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:26.258445    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:26.259002    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:26.300193    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:57:26.300352    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:26.322858    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:57:26.322957    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:26.345763    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:57:26.345850    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:26.357002    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:57:26.357087    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:26.367498    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:57:26.367575    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:26.378210    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:57:26.378287    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:26.388461    9094 logs.go:276] 0 containers: []
	W0920 10:57:26.388473    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:26.388544    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:26.399597    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:57:26.399617    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:26.399622    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:26.434296    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:57:26.434304    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:57:26.445988    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:57:26.445998    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:57:26.457245    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:26.457255    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:26.480250    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:26.480256    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:26.484289    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:57:26.484297    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:57:26.496544    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:57:26.496554    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:57:26.510425    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:57:26.510438    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:57:26.527301    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:57:26.527311    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:57:26.538753    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:57:26.538769    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:26.550360    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:26.550370    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:26.584820    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:57:26.584831    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:57:26.602920    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:57:26.602931    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:57:26.617249    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:57:26.617259    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:57:26.629649    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:57:26.629661    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:57:29.142227    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:34.142894    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:34.143007    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0920 10:57:34.154094    9094 logs.go:276] 1 containers: [59f073d73675]
	I0920 10:57:34.154175    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0920 10:57:34.165427    9094 logs.go:276] 1 containers: [60ae1745c459]
	I0920 10:57:34.165496    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0920 10:57:34.177683    9094 logs.go:276] 4 containers: [106c54e67848 82084e6503e4 cbfdabeebe6c 10e4a5674017]
	I0920 10:57:34.177760    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0920 10:57:34.188361    9094 logs.go:276] 1 containers: [1f479b6fe552]
	I0920 10:57:34.188424    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0920 10:57:34.199332    9094 logs.go:276] 1 containers: [bc7d20e7a34a]
	I0920 10:57:34.199435    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0920 10:57:34.210994    9094 logs.go:276] 1 containers: [4d2af4e5a03f]
	I0920 10:57:34.211073    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0920 10:57:34.222830    9094 logs.go:276] 0 containers: []
	W0920 10:57:34.222844    9094 logs.go:278] No container was found matching "kindnet"
	I0920 10:57:34.222908    9094 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0920 10:57:34.233496    9094 logs.go:276] 1 containers: [5f4dfc00b692]
	I0920 10:57:34.233511    9094 logs.go:123] Gathering logs for kube-scheduler [1f479b6fe552] ...
	I0920 10:57:34.233516    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f479b6fe552"
	I0920 10:57:34.248670    9094 logs.go:123] Gathering logs for storage-provisioner [5f4dfc00b692] ...
	I0920 10:57:34.248682    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f4dfc00b692"
	I0920 10:57:34.261424    9094 logs.go:123] Gathering logs for container status ...
	I0920 10:57:34.261436    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 10:57:34.274850    9094 logs.go:123] Gathering logs for coredns [10e4a5674017] ...
	I0920 10:57:34.274862    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e4a5674017"
	I0920 10:57:34.288752    9094 logs.go:123] Gathering logs for kube-apiserver [59f073d73675] ...
	I0920 10:57:34.288763    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59f073d73675"
	I0920 10:57:34.304403    9094 logs.go:123] Gathering logs for coredns [cbfdabeebe6c] ...
	I0920 10:57:34.304412    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbfdabeebe6c"
	I0920 10:57:34.317736    9094 logs.go:123] Gathering logs for kube-controller-manager [4d2af4e5a03f] ...
	I0920 10:57:34.317747    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d2af4e5a03f"
	I0920 10:57:34.342175    9094 logs.go:123] Gathering logs for kubelet ...
	I0920 10:57:34.342189    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 10:57:34.380066    9094 logs.go:123] Gathering logs for coredns [82084e6503e4] ...
	I0920 10:57:34.380085    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82084e6503e4"
	I0920 10:57:34.401906    9094 logs.go:123] Gathering logs for Docker ...
	I0920 10:57:34.401920    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0920 10:57:34.430192    9094 logs.go:123] Gathering logs for dmesg ...
	I0920 10:57:34.430213    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 10:57:34.435317    9094 logs.go:123] Gathering logs for etcd [60ae1745c459] ...
	I0920 10:57:34.435326    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60ae1745c459"
	I0920 10:57:34.449657    9094 logs.go:123] Gathering logs for coredns [106c54e67848] ...
	I0920 10:57:34.449669    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 106c54e67848"
	I0920 10:57:34.462892    9094 logs.go:123] Gathering logs for kube-proxy [bc7d20e7a34a] ...
	I0920 10:57:34.462904    9094 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc7d20e7a34a"
	I0920 10:57:34.475424    9094 logs.go:123] Gathering logs for describe nodes ...
	I0920 10:57:34.475436    9094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 10:57:37.015522    9094 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0920 10:57:42.018177    9094 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0920 10:57:42.027049    9094 out.go:201] 
	W0920 10:57:42.031823    9094 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0920 10:57:42.031840    9094 out.go:270] * 
	* 
	W0920 10:57:42.033453    9094 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:57:42.048850    9094 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-770000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.15s)
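Note: the loop in the log above is minikube's start path probing https://10.0.2.15:8443/healthz every few seconds (each probe capped by a ~5s client timeout) and re-gathering container logs between probes, until the 6m node wait expires. A minimal sketch of that probe loop, assuming the URL and timeouts taken from the log; waitForHealthy is a hypothetical helper name, not minikube's actual implementation:

    // Sketch only: approximates the probe loop visible in the log above.
    package main

    import (
    	"crypto/tls"
    	"errors"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthy(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped:" line
    		Transport: &http.Transport{
    			// the apiserver serves a self-signed cert, so skip verification here
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz reported healthy
    			}
    		}
    		time.Sleep(2 * time.Second) // gather logs / back off, then retry
    	}
    	return errors.New("apiserver healthz never reported healthy: context deadline exceeded")
    }

    func main() {
    	if err := waitForHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
    		fmt.Println("X GUEST_START:", err)
    	}
    }

Because the apiserver never answered within the client timeout, every iteration fell through to the log-gathering branch, which is why the same container IDs are re-listed before each retry.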

                                                
                                    
TestPause/serial/Start (9.96s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-400000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-400000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.901067875s)

                                                
                                                
-- stdout --
	* [pause-400000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-400000" primary control-plane node in "pause-400000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-400000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-400000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-400000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-400000 -n pause-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-400000 -n pause-400000: exit status 7 (60.886167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.96s)
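Note: every qemu2-driver test in this run fails at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon's unix socket at /var/run/socket_vmnet, so the VM never starts. One way to reproduce the symptom from the host is to dial the socket directly; a sketch (the socket path is taken from the log; this is a diagnostic one-off, not a minikube utility):

    // Sketch only: dials the unix socket that socket_vmnet_client reported
    // as unreachable.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// with the daemon down this prints the same "connection refused"
    		fmt.Println("Failed to connect to \"/var/run/socket_vmnet\":", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

If the dial fails with connection refused while the socket file exists, the socket_vmnet daemon is not running (or is listening elsewhere), which matches the repeated errors above.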

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (10.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-040000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-040000 --driver=qemu2 : exit status 80 (10.185963542s)

                                                
                                                
-- stdout --
	* [NoKubernetes-040000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-040000" primary control-plane node in "NoKubernetes-040000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-040000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-040000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-040000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000: exit status 7 (68.08475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.25s)
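Note: the post-mortem's --format={{.Host}} flag is a Go text/template rendered against minikube's status struct, which is why the command prints only "Stopped". A sketch of the mechanism with an illustrative struct (the field set below is an assumption, not minikube's exact type):

    // Sketch only: demonstrates the text/template mechanism behind
    // --format={{.Host}}.
    package main

    import (
    	"os"
    	"text/template"
    )

    type Status struct {
    	Host, Kubelet, APIServer string
    }

    func main() {
    	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
    	tmpl.Execute(os.Stdout, st) // prints "Stopped", as in the post-mortem above
    }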

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2 : exit status 80 (5.248226833s)

                                                
                                                
-- stdout --
	* [NoKubernetes-040000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-040000
	* Restarting existing qemu2 VM for "NoKubernetes-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-040000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000: exit status 7 (50.448167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

                                                
                                    
TestNoKubernetes/serial/Start (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2 : exit status 80 (5.270134625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-040000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-040000
	* Restarting existing qemu2 VM for "NoKubernetes-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-040000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000: exit status 7 (31.362042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-040000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-040000 --driver=qemu2 : exit status 80 (5.269765s)

                                                
                                                
-- stdout --
	* [NoKubernetes-040000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-040000
	* Restarting existing qemu2 VM for "NoKubernetes-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-040000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-040000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000: exit status 7 (63.055417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.33s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.901393417s)

                                                
                                                
-- stdout --
	* [auto-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-064000" primary control-plane node in "auto-064000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:55:47.180832    9294 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:55:47.180994    9294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:55:47.181000    9294 out.go:358] Setting ErrFile to fd 2...
	I0920 10:55:47.181002    9294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:55:47.181139    9294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:55:47.182213    9294 out.go:352] Setting JSON to false
	I0920 10:55:47.198621    9294 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6918,"bootTime":1726848029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:55:47.198700    9294 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:55:47.205163    9294 out.go:177] * [auto-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:55:47.213029    9294 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:55:47.213121    9294 notify.go:220] Checking for updates...
	I0920 10:55:47.219027    9294 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:55:47.221999    9294 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:55:47.226053    9294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:55:47.227541    9294 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:55:47.231007    9294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:55:47.234431    9294 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:55:47.234497    9294 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:55:47.234549    9294 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:55:47.238833    9294 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:55:47.246036    9294 start.go:297] selected driver: qemu2
	I0920 10:55:47.246043    9294 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:55:47.246056    9294 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:55:47.248302    9294 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:55:47.250965    9294 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:55:47.254102    9294 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:55:47.254135    9294 cni.go:84] Creating CNI manager for ""
	I0920 10:55:47.254169    9294 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:55:47.254174    9294 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:55:47.254203    9294 start.go:340] cluster config:
	{Name:auto-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:55:47.257568    9294 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:55:47.265997    9294 out.go:177] * Starting "auto-064000" primary control-plane node in "auto-064000" cluster
	I0920 10:55:47.270002    9294 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:55:47.270018    9294 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:55:47.270026    9294 cache.go:56] Caching tarball of preloaded images
	I0920 10:55:47.270087    9294 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:55:47.270092    9294 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:55:47.270162    9294 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/auto-064000/config.json ...
	I0920 10:55:47.270179    9294 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/auto-064000/config.json: {Name:mk19b22cda6efffbc679885702c2abba10c5da25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:55:47.270399    9294 start.go:360] acquireMachinesLock for auto-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:55:47.270435    9294 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "auto-064000"
	I0920 10:55:47.270447    9294 start.go:93] Provisioning new machine with config: &{Name:auto-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:55:47.270473    9294 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:55:47.278006    9294 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:55:47.294488    9294 start.go:159] libmachine.API.Create for "auto-064000" (driver="qemu2")
	I0920 10:55:47.294519    9294 client.go:168] LocalClient.Create starting
	I0920 10:55:47.294580    9294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:55:47.294610    9294 main.go:141] libmachine: Decoding PEM data...
	I0920 10:55:47.294620    9294 main.go:141] libmachine: Parsing certificate...
	I0920 10:55:47.294670    9294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:55:47.294693    9294 main.go:141] libmachine: Decoding PEM data...
	I0920 10:55:47.294701    9294 main.go:141] libmachine: Parsing certificate...
	I0920 10:55:47.295106    9294 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:55:47.458503    9294 main.go:141] libmachine: Creating SSH key...
	I0920 10:55:47.666906    9294 main.go:141] libmachine: Creating Disk image...
	I0920 10:55:47.666919    9294 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:55:47.667149    9294 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/disk.qcow2
	I0920 10:55:47.676852    9294 main.go:141] libmachine: STDOUT: 
	I0920 10:55:47.676873    9294 main.go:141] libmachine: STDERR: 
	I0920 10:55:47.676936    9294 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/disk.qcow2 +20000M
	I0920 10:55:47.685245    9294 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:55:47.685261    9294 main.go:141] libmachine: STDERR: 
	I0920 10:55:47.685276    9294 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/disk.qcow2
	I0920 10:55:47.685282    9294 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:55:47.685295    9294 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:55:47.685321    9294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:f8:1d:85:01:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/disk.qcow2
	I0920 10:55:47.686937    9294 main.go:141] libmachine: STDOUT: 
	I0920 10:55:47.686951    9294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:55:47.686969    9294 client.go:171] duration metric: took 392.444709ms to LocalClient.Create
	I0920 10:55:49.689173    9294 start.go:128] duration metric: took 2.41868375s to createHost
	I0920 10:55:49.689271    9294 start.go:83] releasing machines lock for "auto-064000", held for 2.418841208s
	W0920 10:55:49.689364    9294 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:55:49.704756    9294 out.go:177] * Deleting "auto-064000" in qemu2 ...
	W0920 10:55:49.738467    9294 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:55:49.738510    9294 start.go:729] Will try again in 5 seconds ...
	I0920 10:55:54.740601    9294 start.go:360] acquireMachinesLock for auto-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:55:54.740843    9294 start.go:364] duration metric: took 200.25µs to acquireMachinesLock for "auto-064000"
	I0920 10:55:54.740870    9294 start.go:93] Provisioning new machine with config: &{Name:auto-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:55:54.740973    9294 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:55:54.752290    9294 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:55:54.775718    9294 start.go:159] libmachine.API.Create for "auto-064000" (driver="qemu2")
	I0920 10:55:54.775754    9294 client.go:168] LocalClient.Create starting
	I0920 10:55:54.775835    9294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:55:54.775881    9294 main.go:141] libmachine: Decoding PEM data...
	I0920 10:55:54.775893    9294 main.go:141] libmachine: Parsing certificate...
	I0920 10:55:54.775949    9294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:55:54.775982    9294 main.go:141] libmachine: Decoding PEM data...
	I0920 10:55:54.775990    9294 main.go:141] libmachine: Parsing certificate...
	I0920 10:55:54.776352    9294 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:55:54.940270    9294 main.go:141] libmachine: Creating SSH key...
	I0920 10:55:54.988722    9294 main.go:141] libmachine: Creating Disk image...
	I0920 10:55:54.988728    9294 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:55:54.988946    9294 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/disk.qcow2
	I0920 10:55:54.998082    9294 main.go:141] libmachine: STDOUT: 
	I0920 10:55:54.998096    9294 main.go:141] libmachine: STDERR: 
	I0920 10:55:54.998161    9294 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/disk.qcow2 +20000M
	I0920 10:55:55.006224    9294 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:55:55.006239    9294 main.go:141] libmachine: STDERR: 
	I0920 10:55:55.006251    9294 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/disk.qcow2
	I0920 10:55:55.006256    9294 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:55:55.006265    9294 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:55:55.006298    9294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:c4:67:1a:0c:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/auto-064000/disk.qcow2
	I0920 10:55:55.008099    9294 main.go:141] libmachine: STDOUT: 
	I0920 10:55:55.008120    9294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:55:55.008131    9294 client.go:171] duration metric: took 232.37475ms to LocalClient.Create
	I0920 10:55:57.010335    9294 start.go:128] duration metric: took 2.269340375s to createHost
	I0920 10:55:57.010440    9294 start.go:83] releasing machines lock for "auto-064000", held for 2.269597166s
	W0920 10:55:57.010847    9294 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:55:57.020534    9294 out.go:201] 
	W0920 10:55:57.028492    9294 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:55:57.028530    9294 out.go:270] * 
	* 
	W0920 10:55:57.030972    9294 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:55:57.039535    9294 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.90s)
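[Annotation] This failure, and the kindnet, calico, and custom-flannel failures that follow, share one root cause visible in each stderr capture: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never handed a network fd and every start exits with GUEST_PROVISION (exit status 80). As a hedged illustration only (not part of the test suite; the socket path is taken from the SocketVMnetPath field in the configs logged above), a few lines of Go reproduce the same check by dialing the unix socket directly:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// SocketVMnetPath from the cluster configs captured in these logs.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.Dial("unix", sock)
		if err != nil {
			// If nothing is listening on the socket this prints
			// "...connect: connection refused", the same error
			// socket_vmnet_client reports in the failures here; a missing
			// socket file would show up as "no such file or directory" instead.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

On a healthy agent the dial succeeds; here it would fail exactly as the tests do, pointing at the daemon on the CI host (not the individual CNI configurations) as the thing to fix.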

TestNetworkPlugins/group/kindnet/Start (9.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.78757825s)

-- stdout --
	* [kindnet-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-064000" primary control-plane node in "kindnet-064000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:55:59.292846    9403 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:55:59.292994    9403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:55:59.292997    9403 out.go:358] Setting ErrFile to fd 2...
	I0920 10:55:59.293000    9403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:55:59.293123    9403 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:55:59.294154    9403 out.go:352] Setting JSON to false
	I0920 10:55:59.310423    9403 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6930,"bootTime":1726848029,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:55:59.310496    9403 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:55:59.316675    9403 out.go:177] * [kindnet-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:55:59.324620    9403 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:55:59.324653    9403 notify.go:220] Checking for updates...
	I0920 10:55:59.332527    9403 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:55:59.335639    9403 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:55:59.339565    9403 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:55:59.342618    9403 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:55:59.345637    9403 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:55:59.348971    9403 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:55:59.349036    9403 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:55:59.349088    9403 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:55:59.353588    9403 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:55:59.360633    9403 start.go:297] selected driver: qemu2
	I0920 10:55:59.360642    9403 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:55:59.360649    9403 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:55:59.363123    9403 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:55:59.367622    9403 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:55:59.370755    9403 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:55:59.370776    9403 cni.go:84] Creating CNI manager for "kindnet"
	I0920 10:55:59.370780    9403 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 10:55:59.370824    9403 start.go:340] cluster config:
	{Name:kindnet-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:55:59.374578    9403 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:55:59.378592    9403 out.go:177] * Starting "kindnet-064000" primary control-plane node in "kindnet-064000" cluster
	I0920 10:55:59.386569    9403 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:55:59.386588    9403 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:55:59.386598    9403 cache.go:56] Caching tarball of preloaded images
	I0920 10:55:59.386671    9403 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:55:59.386677    9403 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:55:59.386767    9403 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/kindnet-064000/config.json ...
	I0920 10:55:59.386779    9403 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/kindnet-064000/config.json: {Name:mkdb9efd1ef1b19415510420d72c132c4b054a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:55:59.387082    9403 start.go:360] acquireMachinesLock for kindnet-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:55:59.387121    9403 start.go:364] duration metric: took 30.834µs to acquireMachinesLock for "kindnet-064000"
	I0920 10:55:59.387136    9403 start.go:93] Provisioning new machine with config: &{Name:kindnet-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:55:59.387172    9403 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:55:59.390612    9403 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:55:59.408022    9403 start.go:159] libmachine.API.Create for "kindnet-064000" (driver="qemu2")
	I0920 10:55:59.408056    9403 client.go:168] LocalClient.Create starting
	I0920 10:55:59.408126    9403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:55:59.408157    9403 main.go:141] libmachine: Decoding PEM data...
	I0920 10:55:59.408165    9403 main.go:141] libmachine: Parsing certificate...
	I0920 10:55:59.408201    9403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:55:59.408230    9403 main.go:141] libmachine: Decoding PEM data...
	I0920 10:55:59.408243    9403 main.go:141] libmachine: Parsing certificate...
	I0920 10:55:59.408602    9403 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:55:59.572253    9403 main.go:141] libmachine: Creating SSH key...
	I0920 10:55:59.643947    9403 main.go:141] libmachine: Creating Disk image...
	I0920 10:55:59.643959    9403 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:55:59.644240    9403 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/disk.qcow2
	I0920 10:55:59.653920    9403 main.go:141] libmachine: STDOUT: 
	I0920 10:55:59.653943    9403 main.go:141] libmachine: STDERR: 
	I0920 10:55:59.654006    9403 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/disk.qcow2 +20000M
	I0920 10:55:59.662200    9403 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:55:59.662216    9403 main.go:141] libmachine: STDERR: 
	I0920 10:55:59.662245    9403 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/disk.qcow2
	I0920 10:55:59.662250    9403 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:55:59.662263    9403 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:55:59.662293    9403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:db:60:57:cd:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/disk.qcow2
	I0920 10:55:59.663840    9403 main.go:141] libmachine: STDOUT: 
	I0920 10:55:59.663854    9403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:55:59.663874    9403 client.go:171] duration metric: took 255.8135ms to LocalClient.Create
	I0920 10:56:01.665323    9403 start.go:128] duration metric: took 2.278152417s to createHost
	I0920 10:56:01.665370    9403 start.go:83] releasing machines lock for "kindnet-064000", held for 2.278256708s
	W0920 10:56:01.665407    9403 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:01.674954    9403 out.go:177] * Deleting "kindnet-064000" in qemu2 ...
	W0920 10:56:01.691773    9403 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:01.691783    9403 start.go:729] Will try again in 5 seconds ...
	I0920 10:56:06.693908    9403 start.go:360] acquireMachinesLock for kindnet-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:56:06.694431    9403 start.go:364] duration metric: took 439.583µs to acquireMachinesLock for "kindnet-064000"
	I0920 10:56:06.694604    9403 start.go:93] Provisioning new machine with config: &{Name:kindnet-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:56:06.694883    9403 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:56:06.700495    9403 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:56:06.738283    9403 start.go:159] libmachine.API.Create for "kindnet-064000" (driver="qemu2")
	I0920 10:56:06.738337    9403 client.go:168] LocalClient.Create starting
	I0920 10:56:06.738471    9403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:56:06.738549    9403 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:06.738564    9403 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:06.738627    9403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:56:06.738667    9403 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:06.738680    9403 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:06.739203    9403 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:56:06.907044    9403 main.go:141] libmachine: Creating SSH key...
	I0920 10:56:06.985546    9403 main.go:141] libmachine: Creating Disk image...
	I0920 10:56:06.985552    9403 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:56:06.985767    9403 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/disk.qcow2
	I0920 10:56:06.995341    9403 main.go:141] libmachine: STDOUT: 
	I0920 10:56:06.995363    9403 main.go:141] libmachine: STDERR: 
	I0920 10:56:06.995421    9403 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/disk.qcow2 +20000M
	I0920 10:56:07.003354    9403 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:56:07.003375    9403 main.go:141] libmachine: STDERR: 
	I0920 10:56:07.003386    9403 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/disk.qcow2
	I0920 10:56:07.003391    9403 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:56:07.003398    9403 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:56:07.003436    9403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:91:65:11:7f:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kindnet-064000/disk.qcow2
	I0920 10:56:07.005082    9403 main.go:141] libmachine: STDOUT: 
	I0920 10:56:07.005101    9403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:56:07.005115    9403 client.go:171] duration metric: took 266.774708ms to LocalClient.Create
	I0920 10:56:09.007268    9403 start.go:128] duration metric: took 2.312371583s to createHost
	I0920 10:56:09.007357    9403 start.go:83] releasing machines lock for "kindnet-064000", held for 2.312921959s
	W0920 10:56:09.007633    9403 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:09.020155    9403 out.go:201] 
	W0920 10:56:09.024301    9403 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:56:09.024328    9403 out.go:270] * 
	* 
	W0920 10:56:09.026023    9403 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:56:09.039256    9403 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.79s)

TestNetworkPlugins/group/calico/Start (9.95s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.948247167s)

-- stdout --
	* [calico-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-064000" primary control-plane node in "calico-064000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:56:11.373967    9519 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:56:11.374095    9519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:56:11.374099    9519 out.go:358] Setting ErrFile to fd 2...
	I0920 10:56:11.374101    9519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:56:11.374223    9519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:56:11.375314    9519 out.go:352] Setting JSON to false
	I0920 10:56:11.391661    9519 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6942,"bootTime":1726848029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:56:11.391736    9519 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:56:11.397430    9519 out.go:177] * [calico-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:56:11.406307    9519 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:56:11.406344    9519 notify.go:220] Checking for updates...
	I0920 10:56:11.413256    9519 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:56:11.416288    9519 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:56:11.420224    9519 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:56:11.423239    9519 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:56:11.426266    9519 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:56:11.429477    9519 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:56:11.429540    9519 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:56:11.429588    9519 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:56:11.434196    9519 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:56:11.440112    9519 start.go:297] selected driver: qemu2
	I0920 10:56:11.440118    9519 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:56:11.440123    9519 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:56:11.442091    9519 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:56:11.445213    9519 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:56:11.448332    9519 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:56:11.448356    9519 cni.go:84] Creating CNI manager for "calico"
	I0920 10:56:11.448361    9519 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0920 10:56:11.448408    9519 start.go:340] cluster config:
	{Name:calico-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:56:11.451649    9519 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:56:11.459251    9519 out.go:177] * Starting "calico-064000" primary control-plane node in "calico-064000" cluster
	I0920 10:56:11.463269    9519 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:56:11.463285    9519 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:56:11.463295    9519 cache.go:56] Caching tarball of preloaded images
	I0920 10:56:11.463357    9519 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:56:11.463364    9519 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:56:11.463428    9519 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/calico-064000/config.json ...
	I0920 10:56:11.463439    9519 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/calico-064000/config.json: {Name:mkef1f870ae261364d380a398c1413e3008bcfdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:56:11.463649    9519 start.go:360] acquireMachinesLock for calico-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:56:11.463678    9519 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "calico-064000"
	I0920 10:56:11.463690    9519 start.go:93] Provisioning new machine with config: &{Name:calico-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:56:11.463712    9519 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:56:11.471222    9519 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:56:11.487051    9519 start.go:159] libmachine.API.Create for "calico-064000" (driver="qemu2")
	I0920 10:56:11.487080    9519 client.go:168] LocalClient.Create starting
	I0920 10:56:11.487144    9519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:56:11.487174    9519 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:11.487183    9519 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:11.487217    9519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:56:11.487247    9519 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:11.487257    9519 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:11.487658    9519 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:56:11.651427    9519 main.go:141] libmachine: Creating SSH key...
	I0920 10:56:11.865942    9519 main.go:141] libmachine: Creating Disk image...
	I0920 10:56:11.865953    9519 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:56:11.866193    9519 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/disk.qcow2
	I0920 10:56:11.875744    9519 main.go:141] libmachine: STDOUT: 
	I0920 10:56:11.875764    9519 main.go:141] libmachine: STDERR: 
	I0920 10:56:11.875841    9519 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/disk.qcow2 +20000M
	I0920 10:56:11.883961    9519 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:56:11.883978    9519 main.go:141] libmachine: STDERR: 
	I0920 10:56:11.883999    9519 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/disk.qcow2
	I0920 10:56:11.884004    9519 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:56:11.884014    9519 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:56:11.884039    9519 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:e9:77:be:7f:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/disk.qcow2
	I0920 10:56:11.885562    9519 main.go:141] libmachine: STDOUT: 
	I0920 10:56:11.885578    9519 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:56:11.885597    9519 client.go:171] duration metric: took 398.514667ms to LocalClient.Create
	I0920 10:56:13.887718    9519 start.go:128] duration metric: took 2.424008666s to createHost
	I0920 10:56:13.887750    9519 start.go:83] releasing machines lock for "calico-064000", held for 2.424082083s
	W0920 10:56:13.887786    9519 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:13.903887    9519 out.go:177] * Deleting "calico-064000" in qemu2 ...
	W0920 10:56:13.925682    9519 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:13.925694    9519 start.go:729] Will try again in 5 seconds ...
	I0920 10:56:18.927334    9519 start.go:360] acquireMachinesLock for calico-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:56:18.927532    9519 start.go:364] duration metric: took 154.083µs to acquireMachinesLock for "calico-064000"
	I0920 10:56:18.927558    9519 start.go:93] Provisioning new machine with config: &{Name:calico-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:56:18.927634    9519 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:56:18.939380    9519 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:56:18.964640    9519 start.go:159] libmachine.API.Create for "calico-064000" (driver="qemu2")
	I0920 10:56:18.964675    9519 client.go:168] LocalClient.Create starting
	I0920 10:56:18.964741    9519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:56:18.964783    9519 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:18.964795    9519 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:18.964836    9519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:56:18.964864    9519 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:18.964876    9519 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:18.965222    9519 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:56:19.131108    9519 main.go:141] libmachine: Creating SSH key...
	I0920 10:56:19.225727    9519 main.go:141] libmachine: Creating Disk image...
	I0920 10:56:19.225734    9519 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:56:19.225951    9519 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/disk.qcow2
	I0920 10:56:19.235649    9519 main.go:141] libmachine: STDOUT: 
	I0920 10:56:19.235668    9519 main.go:141] libmachine: STDERR: 
	I0920 10:56:19.235721    9519 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/disk.qcow2 +20000M
	I0920 10:56:19.243951    9519 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:56:19.243964    9519 main.go:141] libmachine: STDERR: 
	I0920 10:56:19.243979    9519 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/disk.qcow2
	I0920 10:56:19.243990    9519 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:56:19.244002    9519 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:56:19.244047    9519 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:58:df:dc:45:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/calico-064000/disk.qcow2
	I0920 10:56:19.245764    9519 main.go:141] libmachine: STDOUT: 
	I0920 10:56:19.245778    9519 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:56:19.245791    9519 client.go:171] duration metric: took 281.113ms to LocalClient.Create
	I0920 10:56:21.247981    9519 start.go:128] duration metric: took 2.320329792s to createHost
	I0920 10:56:21.248086    9519 start.go:83] releasing machines lock for "calico-064000", held for 2.320555875s
	W0920 10:56:21.248566    9519 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:21.259209    9519 out.go:201] 
	W0920 10:56:21.267304    9519 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:56:21.267333    9519 out.go:270] * 
	* 
	W0920 10:56:21.270025    9519 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:56:21.279180    9519 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.95s)

TestNetworkPlugins/group/custom-flannel/Start (9.76s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.755003959s)

-- stdout --
	* [custom-flannel-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-064000" primary control-plane node in "custom-flannel-064000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:56:23.761009    9639 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:56:23.761171    9639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:56:23.761179    9639 out.go:358] Setting ErrFile to fd 2...
	I0920 10:56:23.761181    9639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:56:23.761322    9639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:56:23.762618    9639 out.go:352] Setting JSON to false
	I0920 10:56:23.779465    9639 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6954,"bootTime":1726848029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:56:23.779556    9639 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:56:23.785285    9639 out.go:177] * [custom-flannel-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:56:23.792127    9639 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:56:23.792198    9639 notify.go:220] Checking for updates...
	I0920 10:56:23.800046    9639 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:56:23.803192    9639 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:56:23.806078    9639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:56:23.809132    9639 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:56:23.812118    9639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:56:23.815419    9639 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:56:23.815484    9639 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:56:23.815528    9639 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:56:23.820100    9639 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:56:23.826091    9639 start.go:297] selected driver: qemu2
	I0920 10:56:23.826099    9639 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:56:23.826106    9639 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:56:23.828409    9639 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:56:23.832067    9639 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:56:23.835229    9639 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:56:23.835248    9639 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0920 10:56:23.835258    9639 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0920 10:56:23.835301    9639 start.go:340] cluster config:
	{Name:custom-flannel-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:56:23.838824    9639 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:56:23.846085    9639 out.go:177] * Starting "custom-flannel-064000" primary control-plane node in "custom-flannel-064000" cluster
	I0920 10:56:23.849975    9639 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:56:23.849990    9639 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:56:23.850000    9639 cache.go:56] Caching tarball of preloaded images
	I0920 10:56:23.850064    9639 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:56:23.850069    9639 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:56:23.850118    9639 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/custom-flannel-064000/config.json ...
	I0920 10:56:23.850128    9639 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/custom-flannel-064000/config.json: {Name:mkf92e9ff58acea9f22414144972e39a6f405a86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:56:23.850328    9639 start.go:360] acquireMachinesLock for custom-flannel-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:56:23.850366    9639 start.go:364] duration metric: took 28.291µs to acquireMachinesLock for "custom-flannel-064000"
	I0920 10:56:23.850377    9639 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:56:23.850401    9639 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:56:23.854128    9639 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:56:23.869362    9639 start.go:159] libmachine.API.Create for "custom-flannel-064000" (driver="qemu2")
	I0920 10:56:23.869392    9639 client.go:168] LocalClient.Create starting
	I0920 10:56:23.869445    9639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:56:23.869493    9639 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:23.869502    9639 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:23.869527    9639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:56:23.869553    9639 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:23.869561    9639 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:23.869922    9639 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:56:24.033886    9639 main.go:141] libmachine: Creating SSH key...
	I0920 10:56:24.113270    9639 main.go:141] libmachine: Creating Disk image...
	I0920 10:56:24.113277    9639 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:56:24.113492    9639 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/disk.qcow2
	I0920 10:56:24.122944    9639 main.go:141] libmachine: STDOUT: 
	I0920 10:56:24.122961    9639 main.go:141] libmachine: STDERR: 
	I0920 10:56:24.123028    9639 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/disk.qcow2 +20000M
	I0920 10:56:24.131175    9639 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:56:24.131190    9639 main.go:141] libmachine: STDERR: 
	I0920 10:56:24.131222    9639 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/disk.qcow2
	I0920 10:56:24.131228    9639 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:56:24.131240    9639 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:56:24.131266    9639 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:9d:33:11:83:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/disk.qcow2
	I0920 10:56:24.132908    9639 main.go:141] libmachine: STDOUT: 
	I0920 10:56:24.132923    9639 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:56:24.132943    9639 client.go:171] duration metric: took 263.548167ms to LocalClient.Create
	I0920 10:56:26.135108    9639 start.go:128] duration metric: took 2.284701375s to createHost
	I0920 10:56:26.135159    9639 start.go:83] releasing machines lock for "custom-flannel-064000", held for 2.284799625s
	W0920 10:56:26.135248    9639 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:26.140141    9639 out.go:177] * Deleting "custom-flannel-064000" in qemu2 ...
	W0920 10:56:26.167695    9639 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:26.167718    9639 start.go:729] Will try again in 5 seconds ...
	I0920 10:56:31.168681    9639 start.go:360] acquireMachinesLock for custom-flannel-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:56:31.168788    9639 start.go:364] duration metric: took 85.666µs to acquireMachinesLock for "custom-flannel-064000"
	I0920 10:56:31.168800    9639 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:56:31.168844    9639 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:56:31.178063    9639 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:56:31.194714    9639 start.go:159] libmachine.API.Create for "custom-flannel-064000" (driver="qemu2")
	I0920 10:56:31.194748    9639 client.go:168] LocalClient.Create starting
	I0920 10:56:31.194820    9639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:56:31.194855    9639 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:31.194863    9639 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:31.194903    9639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:56:31.194926    9639 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:31.194932    9639 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:31.195231    9639 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:56:31.358166    9639 main.go:141] libmachine: Creating SSH key...
	I0920 10:56:31.419185    9639 main.go:141] libmachine: Creating Disk image...
	I0920 10:56:31.419196    9639 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:56:31.419436    9639 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/disk.qcow2
	I0920 10:56:31.429887    9639 main.go:141] libmachine: STDOUT: 
	I0920 10:56:31.429915    9639 main.go:141] libmachine: STDERR: 
	I0920 10:56:31.430006    9639 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/disk.qcow2 +20000M
	I0920 10:56:31.439122    9639 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:56:31.439161    9639 main.go:141] libmachine: STDERR: 
	I0920 10:56:31.439176    9639 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/disk.qcow2
	I0920 10:56:31.439186    9639 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:56:31.439200    9639 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:56:31.439240    9639 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:30:be:2f:ea:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/custom-flannel-064000/disk.qcow2
	I0920 10:56:31.441275    9639 main.go:141] libmachine: STDOUT: 
	I0920 10:56:31.441293    9639 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:56:31.441308    9639 client.go:171] duration metric: took 246.556417ms to LocalClient.Create
	I0920 10:56:33.443592    9639 start.go:128] duration metric: took 2.274719875s to createHost
	I0920 10:56:33.443695    9639 start.go:83] releasing machines lock for "custom-flannel-064000", held for 2.274911292s
	W0920 10:56:33.444057    9639 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:33.456508    9639 out.go:201] 
	W0920 10:56:33.460652    9639 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:56:33.460680    9639 out.go:270] * 
	* 
	W0920 10:56:33.463452    9639 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:56:33.473607    9639 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.76s)
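
The ~9.8s durations across this group are fully accounted for by the log above: two createHost attempts of roughly 2.3s each, the fixed 5-second backoff between them ("Will try again in 5 seconds"), and a little test overhead. The retry cannot help, because the refused connection is a persistent host daemon problem, not a transient guest one. As a sanity check that nothing is listening, the socket can be probed directly, independent of minikube (a sketch assuming the BSD netcat that ships with macOS):

	# Exits non-zero with "Connection refused" when no daemon holds the socket
	nc -U /var/run/socket_vmnet < /dev/null && echo reachable || echo refused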

TestNetworkPlugins/group/false/Start (9.86s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.858076625s)

-- stdout --
	* [false-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-064000" primary control-plane node in "false-064000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:56:35.913463    9756 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:56:35.913595    9756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:56:35.913598    9756 out.go:358] Setting ErrFile to fd 2...
	I0920 10:56:35.913601    9756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:56:35.913734    9756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:56:35.914822    9756 out.go:352] Setting JSON to false
	I0920 10:56:35.931357    9756 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6966,"bootTime":1726848029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:56:35.931444    9756 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:56:35.939176    9756 out.go:177] * [false-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:56:35.946979    9756 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:56:35.947041    9756 notify.go:220] Checking for updates...
	I0920 10:56:35.954965    9756 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:56:35.957978    9756 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:56:35.961929    9756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:56:35.964992    9756 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:56:35.967979    9756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:56:35.971384    9756 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:56:35.971449    9756 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:56:35.971502    9756 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:56:35.975018    9756 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:56:35.981930    9756 start.go:297] selected driver: qemu2
	I0920 10:56:35.981936    9756 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:56:35.981942    9756 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:56:35.984071    9756 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:56:35.987968    9756 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:56:35.991025    9756 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:56:35.991051    9756 cni.go:84] Creating CNI manager for "false"
	I0920 10:56:35.991091    9756 start.go:340] cluster config:
	{Name:false-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:56:35.994712    9756 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:56:36.002943    9756 out.go:177] * Starting "false-064000" primary control-plane node in "false-064000" cluster
	I0920 10:56:36.006966    9756 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:56:36.006992    9756 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:56:36.006998    9756 cache.go:56] Caching tarball of preloaded images
	I0920 10:56:36.007056    9756 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:56:36.007061    9756 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:56:36.007123    9756 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/false-064000/config.json ...
	I0920 10:56:36.007135    9756 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/false-064000/config.json: {Name:mke20a15e75b1ad1315a6ea0fb878263bd8e9686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:56:36.007363    9756 start.go:360] acquireMachinesLock for false-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:56:36.007397    9756 start.go:364] duration metric: took 27.708µs to acquireMachinesLock for "false-064000"
	I0920 10:56:36.007410    9756 start.go:93] Provisioning new machine with config: &{Name:false-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:56:36.007437    9756 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:56:36.014921    9756 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:56:36.031946    9756 start.go:159] libmachine.API.Create for "false-064000" (driver="qemu2")
	I0920 10:56:36.031979    9756 client.go:168] LocalClient.Create starting
	I0920 10:56:36.032049    9756 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:56:36.032079    9756 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:36.032088    9756 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:36.032131    9756 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:56:36.032159    9756 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:36.032168    9756 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:36.032518    9756 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:56:36.196981    9756 main.go:141] libmachine: Creating SSH key...
	I0920 10:56:36.280507    9756 main.go:141] libmachine: Creating Disk image...
	I0920 10:56:36.280516    9756 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:56:36.280725    9756 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/disk.qcow2
	I0920 10:56:36.289716    9756 main.go:141] libmachine: STDOUT: 
	I0920 10:56:36.289732    9756 main.go:141] libmachine: STDERR: 
	I0920 10:56:36.289797    9756 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/disk.qcow2 +20000M
	I0920 10:56:36.297682    9756 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:56:36.297711    9756 main.go:141] libmachine: STDERR: 
	I0920 10:56:36.297732    9756 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/disk.qcow2
	I0920 10:56:36.297738    9756 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:56:36.297748    9756 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:56:36.297772    9756 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:33:3b:13:e3:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/disk.qcow2
	I0920 10:56:36.299394    9756 main.go:141] libmachine: STDOUT: 
	I0920 10:56:36.299416    9756 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:56:36.299435    9756 client.go:171] duration metric: took 267.45075ms to LocalClient.Create
	I0920 10:56:38.301671    9756 start.go:128] duration metric: took 2.294219375s to createHost
	I0920 10:56:38.301787    9756 start.go:83] releasing machines lock for "false-064000", held for 2.2943855s
	W0920 10:56:38.301886    9756 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:38.319224    9756 out.go:177] * Deleting "false-064000" in qemu2 ...
	W0920 10:56:38.348441    9756 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:38.348466    9756 start.go:729] Will try again in 5 seconds ...
	I0920 10:56:43.350022    9756 start.go:360] acquireMachinesLock for false-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:56:43.350563    9756 start.go:364] duration metric: took 458.958µs to acquireMachinesLock for "false-064000"
	I0920 10:56:43.350700    9756 start.go:93] Provisioning new machine with config: &{Name:false-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:56:43.351045    9756 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:56:43.357782    9756 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:56:43.407937    9756 start.go:159] libmachine.API.Create for "false-064000" (driver="qemu2")
	I0920 10:56:43.407999    9756 client.go:168] LocalClient.Create starting
	I0920 10:56:43.408140    9756 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:56:43.408202    9756 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:43.408218    9756 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:43.408284    9756 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:56:43.408330    9756 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:43.408346    9756 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:43.409033    9756 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:56:43.580633    9756 main.go:141] libmachine: Creating SSH key...
	I0920 10:56:43.678543    9756 main.go:141] libmachine: Creating Disk image...
	I0920 10:56:43.678550    9756 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:56:43.678759    9756 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/disk.qcow2
	I0920 10:56:43.688281    9756 main.go:141] libmachine: STDOUT: 
	I0920 10:56:43.688313    9756 main.go:141] libmachine: STDERR: 
	I0920 10:56:43.688368    9756 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/disk.qcow2 +20000M
	I0920 10:56:43.696202    9756 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:56:43.696232    9756 main.go:141] libmachine: STDERR: 
	I0920 10:56:43.696243    9756 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/disk.qcow2
	I0920 10:56:43.696250    9756 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:56:43.696258    9756 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:56:43.696286    9756 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e6:1d:e9:1c:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/false-064000/disk.qcow2
	I0920 10:56:43.697947    9756 main.go:141] libmachine: STDOUT: 
	I0920 10:56:43.697966    9756 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:56:43.697978    9756 client.go:171] duration metric: took 289.975ms to LocalClient.Create
	I0920 10:56:45.700149    9756 start.go:128] duration metric: took 2.349088083s to createHost
	I0920 10:56:45.700237    9756 start.go:83] releasing machines lock for "false-064000", held for 2.349655917s
	W0920 10:56:45.700542    9756 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:45.717109    9756 out.go:201] 
	W0920 10:56:45.722288    9756 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:56:45.722340    9756 out.go:270] * 
	* 
	W0920 10:56:45.724810    9756 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:56:45.732250    9756 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.86s)
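
Each aborted run leaves its profile config and qcow2 disk images on disk; the error text itself recommends "minikube delete -p <profile>". Before rerunning the suite, a cleanup with the same binary under test might look like this (flags as documented by minikube delete --help):

	# Remove the one leftover profile named in the failure above
	out/minikube-darwin-arm64 delete -p false-064000
	# Or remove every profile and the cached state under MINIKUBE_HOME in one pass
	out/minikube-darwin-arm64 delete --all --purge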

TestNetworkPlugins/group/enable-default-cni/Start (9.91s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.910635916s)

-- stdout --
	* [enable-default-cni-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-064000" primary control-plane node in "enable-default-cni-064000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:56:47.925885    9865 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:56:47.926037    9865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:56:47.926041    9865 out.go:358] Setting ErrFile to fd 2...
	I0920 10:56:47.926043    9865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:56:47.926187    9865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:56:47.927304    9865 out.go:352] Setting JSON to false
	I0920 10:56:47.943694    9865 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6978,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:56:47.943767    9865 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:56:47.951211    9865 out.go:177] * [enable-default-cni-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:56:47.959017    9865 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:56:47.959104    9865 notify.go:220] Checking for updates...
	I0920 10:56:47.965999    9865 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:56:47.972071    9865 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:56:47.975036    9865 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:56:47.978036    9865 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:56:47.981067    9865 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:56:47.984347    9865 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:56:47.984421    9865 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:56:47.984463    9865 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:56:47.988077    9865 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:56:47.994965    9865 start.go:297] selected driver: qemu2
	I0920 10:56:47.994972    9865 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:56:47.994977    9865 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:56:47.997081    9865 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:56:48.001044    9865 out.go:177] * Automatically selected the socket_vmnet network
	E0920 10:56:48.004169    9865 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0920 10:56:48.004180    9865 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:56:48.004198    9865 cni.go:84] Creating CNI manager for "bridge"
	I0920 10:56:48.004211    9865 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:56:48.004248    9865 start.go:340] cluster config:
	{Name:enable-default-cni-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:56:48.007585    9865 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:56:48.016131    9865 out.go:177] * Starting "enable-default-cni-064000" primary control-plane node in "enable-default-cni-064000" cluster
	I0920 10:56:48.018951    9865 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:56:48.018964    9865 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:56:48.018972    9865 cache.go:56] Caching tarball of preloaded images
	I0920 10:56:48.019029    9865 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:56:48.019034    9865 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:56:48.019092    9865 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/enable-default-cni-064000/config.json ...
	I0920 10:56:48.019103    9865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/enable-default-cni-064000/config.json: {Name:mkb7b329cf84fbf739d980ec0a793f48d986b166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:56:48.019314    9865 start.go:360] acquireMachinesLock for enable-default-cni-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:56:48.019351    9865 start.go:364] duration metric: took 29.209µs to acquireMachinesLock for "enable-default-cni-064000"
	I0920 10:56:48.019363    9865 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:56:48.019386    9865 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:56:48.026972    9865 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:56:48.042335    9865 start.go:159] libmachine.API.Create for "enable-default-cni-064000" (driver="qemu2")
	I0920 10:56:48.042366    9865 client.go:168] LocalClient.Create starting
	I0920 10:56:48.042437    9865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:56:48.042473    9865 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:48.042481    9865 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:48.042525    9865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:56:48.042547    9865 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:48.042554    9865 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:48.042949    9865 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:56:48.207189    9865 main.go:141] libmachine: Creating SSH key...
	I0920 10:56:48.330108    9865 main.go:141] libmachine: Creating Disk image...
	I0920 10:56:48.330120    9865 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:56:48.330341    9865 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/disk.qcow2
	I0920 10:56:48.339983    9865 main.go:141] libmachine: STDOUT: 
	I0920 10:56:48.340002    9865 main.go:141] libmachine: STDERR: 
	I0920 10:56:48.340059    9865 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/disk.qcow2 +20000M
	I0920 10:56:48.348255    9865 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:56:48.348271    9865 main.go:141] libmachine: STDERR: 
	I0920 10:56:48.348287    9865 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/disk.qcow2
	I0920 10:56:48.348295    9865 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:56:48.348314    9865 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:56:48.348340    9865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:1c:81:14:9b:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/disk.qcow2
	I0920 10:56:48.350005    9865 main.go:141] libmachine: STDOUT: 
	I0920 10:56:48.350020    9865 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:56:48.350039    9865 client.go:171] duration metric: took 307.668583ms to LocalClient.Create
	I0920 10:56:50.352242    9865 start.go:128] duration metric: took 2.332846416s to createHost
	I0920 10:56:50.352326    9865 start.go:83] releasing machines lock for "enable-default-cni-064000", held for 2.332983042s
	W0920 10:56:50.352403    9865 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:50.365005    9865 out.go:177] * Deleting "enable-default-cni-064000" in qemu2 ...
	W0920 10:56:50.393687    9865 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:50.393709    9865 start.go:729] Will try again in 5 seconds ...
	I0920 10:56:55.395836    9865 start.go:360] acquireMachinesLock for enable-default-cni-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:56:55.396285    9865 start.go:364] duration metric: took 372.25µs to acquireMachinesLock for "enable-default-cni-064000"
	I0920 10:56:55.396340    9865 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:56:55.396593    9865 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:56:55.406961    9865 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:56:55.450898    9865 start.go:159] libmachine.API.Create for "enable-default-cni-064000" (driver="qemu2")
	I0920 10:56:55.450949    9865 client.go:168] LocalClient.Create starting
	I0920 10:56:55.451090    9865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:56:55.451162    9865 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:55.451176    9865 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:55.451250    9865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:56:55.451295    9865 main.go:141] libmachine: Decoding PEM data...
	I0920 10:56:55.451315    9865 main.go:141] libmachine: Parsing certificate...
	I0920 10:56:55.451957    9865 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:56:55.620652    9865 main.go:141] libmachine: Creating SSH key...
	I0920 10:56:55.752710    9865 main.go:141] libmachine: Creating Disk image...
	I0920 10:56:55.752718    9865 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:56:55.752938    9865 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/disk.qcow2
	I0920 10:56:55.763182    9865 main.go:141] libmachine: STDOUT: 
	I0920 10:56:55.763207    9865 main.go:141] libmachine: STDERR: 
	I0920 10:56:55.763288    9865 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/disk.qcow2 +20000M
	I0920 10:56:55.771544    9865 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:56:55.771560    9865 main.go:141] libmachine: STDERR: 
	I0920 10:56:55.771572    9865 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/disk.qcow2
	I0920 10:56:55.771578    9865 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:56:55.771585    9865 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:56:55.771633    9865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:34:d5:32:e4:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/enable-default-cni-064000/disk.qcow2
	I0920 10:56:55.773259    9865 main.go:141] libmachine: STDOUT: 
	I0920 10:56:55.773281    9865 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:56:55.773296    9865 client.go:171] duration metric: took 322.342542ms to LocalClient.Create
	I0920 10:56:57.775406    9865 start.go:128] duration metric: took 2.3788035s to createHost
	I0920 10:56:57.775473    9865 start.go:83] releasing machines lock for "enable-default-cni-064000", held for 2.379186458s
	W0920 10:56:57.775784    9865 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:56:57.784278    9865 out.go:201] 
	W0920 10:56:57.790327    9865 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:56:57.790340    9865 out.go:270] * 
	* 
	W0920 10:56:57.792032    9865 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:56:57.798973    9865 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.91s)
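All three Start failures in this group (enable-default-cni here, flannel and bridge below) exit with the same GUEST_PROVISION cause: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's socket at /var/run/socket_vmnet ("Connection refused"), so host creation fails on both the first attempt and the 5-second retry. A minimal triage sketch for the CI agent follows; the two paths are taken from the log above, but the restart command line is an assumption based on the socket_vmnet README (on this agent the daemon may instead be managed by launchd or brew services):

	# Is the daemon's unix socket present, and is the process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Hypothetical manual restart (vmnet access requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

If the socket file exists but connections are still refused, a stale socket left behind by a crashed daemon is a plausible culprit; removing it before restarting is a common remedy.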

TestNetworkPlugins/group/flannel/Start (9.81s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.811185333s)

-- stdout --
	* [flannel-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-064000" primary control-plane node in "flannel-064000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:56:59.943912    9974 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:56:59.944054    9974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:56:59.944057    9974 out.go:358] Setting ErrFile to fd 2...
	I0920 10:56:59.944059    9974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:56:59.944200    9974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:56:59.945322    9974 out.go:352] Setting JSON to false
	I0920 10:56:59.961583    9974 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6990,"bootTime":1726848029,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:56:59.961647    9974 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:56:59.968651    9974 out.go:177] * [flannel-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:56:59.976579    9974 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:56:59.976620    9974 notify.go:220] Checking for updates...
	I0920 10:56:59.984690    9974 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:56:59.987624    9974 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:56:59.990694    9974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:56:59.993677    9974 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:56:59.996657    9974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:57:00.000000    9974 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:57:00.000065    9974 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:57:00.000110    9974 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:57:00.003653    9974 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:57:00.010612    9974 start.go:297] selected driver: qemu2
	I0920 10:57:00.010618    9974 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:57:00.010623    9974 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:57:00.012834    9974 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:57:00.016684    9974 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:57:00.019638    9974 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:57:00.019655    9974 cni.go:84] Creating CNI manager for "flannel"
	I0920 10:57:00.019658    9974 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0920 10:57:00.019688    9974 start.go:340] cluster config:
	{Name:flannel-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:57:00.023290    9974 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:00.031487    9974 out.go:177] * Starting "flannel-064000" primary control-plane node in "flannel-064000" cluster
	I0920 10:57:00.035632    9974 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:57:00.035648    9974 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:57:00.035662    9974 cache.go:56] Caching tarball of preloaded images
	I0920 10:57:00.035728    9974 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:57:00.035734    9974 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:57:00.035797    9974 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/flannel-064000/config.json ...
	I0920 10:57:00.035809    9974 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/flannel-064000/config.json: {Name:mkaeaa374f75ac15e51cad13970f8b535122bb01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:57:00.036031    9974 start.go:360] acquireMachinesLock for flannel-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:57:00.036064    9974 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "flannel-064000"
	I0920 10:57:00.036077    9974 start.go:93] Provisioning new machine with config: &{Name:flannel-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:57:00.036100    9974 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:57:00.043643    9974 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:57:00.060014    9974 start.go:159] libmachine.API.Create for "flannel-064000" (driver="qemu2")
	I0920 10:57:00.060050    9974 client.go:168] LocalClient.Create starting
	I0920 10:57:00.060122    9974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:57:00.060151    9974 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:00.060161    9974 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:00.060201    9974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:57:00.060224    9974 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:00.060232    9974 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:00.060586    9974 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:57:00.223834    9974 main.go:141] libmachine: Creating SSH key...
	I0920 10:57:00.264265    9974 main.go:141] libmachine: Creating Disk image...
	I0920 10:57:00.264271    9974 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:57:00.264489    9974 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/disk.qcow2
	I0920 10:57:00.273645    9974 main.go:141] libmachine: STDOUT: 
	I0920 10:57:00.273662    9974 main.go:141] libmachine: STDERR: 
	I0920 10:57:00.273715    9974 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/disk.qcow2 +20000M
	I0920 10:57:00.281541    9974 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:57:00.281557    9974 main.go:141] libmachine: STDERR: 
	I0920 10:57:00.281580    9974 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/disk.qcow2
	I0920 10:57:00.281586    9974 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:57:00.281596    9974 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:57:00.281624    9974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:3d:e7:cb:20:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/disk.qcow2
	I0920 10:57:00.283260    9974 main.go:141] libmachine: STDOUT: 
	I0920 10:57:00.283273    9974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:57:00.283292    9974 client.go:171] duration metric: took 223.237416ms to LocalClient.Create
	I0920 10:57:02.283969    9974 start.go:128] duration metric: took 2.24786s to createHost
	I0920 10:57:02.284024    9974 start.go:83] releasing machines lock for "flannel-064000", held for 2.247966958s
	W0920 10:57:02.284092    9974 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:02.294368    9974 out.go:177] * Deleting "flannel-064000" in qemu2 ...
	W0920 10:57:02.325071    9974 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:02.325101    9974 start.go:729] Will try again in 5 seconds ...
	I0920 10:57:07.327313    9974 start.go:360] acquireMachinesLock for flannel-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:57:07.327954    9974 start.go:364] duration metric: took 521.5µs to acquireMachinesLock for "flannel-064000"
	I0920 10:57:07.328111    9974 start.go:93] Provisioning new machine with config: &{Name:flannel-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:57:07.328361    9974 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:57:07.336966    9974 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:57:07.381104    9974 start.go:159] libmachine.API.Create for "flannel-064000" (driver="qemu2")
	I0920 10:57:07.381162    9974 client.go:168] LocalClient.Create starting
	I0920 10:57:07.381293    9974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:57:07.381355    9974 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:07.381370    9974 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:07.381434    9974 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:57:07.381473    9974 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:07.381493    9974 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:07.382040    9974 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:57:07.550863    9974 main.go:141] libmachine: Creating SSH key...
	I0920 10:57:07.658693    9974 main.go:141] libmachine: Creating Disk image...
	I0920 10:57:07.658702    9974 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:57:07.658920    9974 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/disk.qcow2
	I0920 10:57:07.668651    9974 main.go:141] libmachine: STDOUT: 
	I0920 10:57:07.668678    9974 main.go:141] libmachine: STDERR: 
	I0920 10:57:07.668737    9974 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/disk.qcow2 +20000M
	I0920 10:57:07.676725    9974 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:57:07.676744    9974 main.go:141] libmachine: STDERR: 
	I0920 10:57:07.676753    9974 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/disk.qcow2
	I0920 10:57:07.676757    9974 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:57:07.676766    9974 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:57:07.676823    9974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:46:f3:74:f6:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/flannel-064000/disk.qcow2
	I0920 10:57:07.678479    9974 main.go:141] libmachine: STDOUT: 
	I0920 10:57:07.678492    9974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:57:07.678505    9974 client.go:171] duration metric: took 297.339416ms to LocalClient.Create
	I0920 10:57:09.680682    9974 start.go:128] duration metric: took 2.352303583s to createHost
	I0920 10:57:09.680745    9974 start.go:83] releasing machines lock for "flannel-064000", held for 2.352766209s
	W0920 10:57:09.681110    9974 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:09.694791    9974 out.go:201] 
	W0920 10:57:09.697920    9974 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:57:09.697950    9974 out.go:270] * 
	* 
	W0920 10:57:09.700802    9974 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:57:09.712812    9974 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.81s)
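The flannel run fails at the same socket_vmnet connection step as the other plugins, before --cni=flannel has any effect, so this is an infrastructure failure rather than a flannel regression. Once the daemon is reachable again, a single subtest can be re-run without the whole suite; a sketch, assuming the integration tests live under test/integration (where net_test.go sits in the minikube repo) and that the harness defines a -minikube-start-args flag, both assumptions about the repo rather than facts from this log:

	go test ./test/integration -run 'TestNetworkPlugins/group/flannel' -timeout 30m \
	  -args -minikube-start-args='--driver=qemu2'

The -args separator hands everything after it to the test binary unparsed, which is the safe way to pass harness-specific flags through go test.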

TestNetworkPlugins/group/bridge/Start (9.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.821399209s)

-- stdout --
	* [bridge-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-064000" primary control-plane node in "bridge-064000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:57:12.096067   10092 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:57:12.096200   10092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:57:12.096204   10092 out.go:358] Setting ErrFile to fd 2...
	I0920 10:57:12.096206   10092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:57:12.096337   10092 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:57:12.097470   10092 out.go:352] Setting JSON to false
	I0920 10:57:12.113964   10092 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7003,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:57:12.114039   10092 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:57:12.120545   10092 out.go:177] * [bridge-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:57:12.128444   10092 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:57:12.128544   10092 notify.go:220] Checking for updates...
	I0920 10:57:12.135418   10092 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:57:12.138494   10092 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:57:12.142445   10092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:57:12.145459   10092 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:57:12.148451   10092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:57:12.151764   10092 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:57:12.151824   10092 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:57:12.151873   10092 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:57:12.156413   10092 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:57:12.167420   10092 start.go:297] selected driver: qemu2
	I0920 10:57:12.167427   10092 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:57:12.167433   10092 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:57:12.169554   10092 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:57:12.173423   10092 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:57:12.174978   10092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:57:12.175015   10092 cni.go:84] Creating CNI manager for "bridge"
	I0920 10:57:12.175023   10092 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:57:12.175054   10092 start.go:340] cluster config:
	{Name:bridge-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:57:12.178818   10092 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:12.187420   10092 out.go:177] * Starting "bridge-064000" primary control-plane node in "bridge-064000" cluster
	I0920 10:57:12.191373   10092 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:57:12.191387   10092 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:57:12.191396   10092 cache.go:56] Caching tarball of preloaded images
	I0920 10:57:12.191458   10092 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:57:12.191464   10092 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:57:12.191518   10092 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/bridge-064000/config.json ...
	I0920 10:57:12.191528   10092 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/bridge-064000/config.json: {Name:mkca031b67c3de1386155ac9f9e787fbdd099ebe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:57:12.191878   10092 start.go:360] acquireMachinesLock for bridge-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:57:12.191917   10092 start.go:364] duration metric: took 32.667µs to acquireMachinesLock for "bridge-064000"
	I0920 10:57:12.191930   10092 start.go:93] Provisioning new machine with config: &{Name:bridge-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:57:12.191957   10092 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:57:12.196514   10092 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:57:12.213506   10092 start.go:159] libmachine.API.Create for "bridge-064000" (driver="qemu2")
	I0920 10:57:12.213536   10092 client.go:168] LocalClient.Create starting
	I0920 10:57:12.213610   10092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:57:12.213653   10092 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:12.213661   10092 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:12.213707   10092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:57:12.213734   10092 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:12.213749   10092 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:12.214137   10092 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:57:12.377103   10092 main.go:141] libmachine: Creating SSH key...
	I0920 10:57:12.452071   10092 main.go:141] libmachine: Creating Disk image...
	I0920 10:57:12.452080   10092 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:57:12.452306   10092 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/disk.qcow2
	I0920 10:57:12.461515   10092 main.go:141] libmachine: STDOUT: 
	I0920 10:57:12.461531   10092 main.go:141] libmachine: STDERR: 
	I0920 10:57:12.461602   10092 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/disk.qcow2 +20000M
	I0920 10:57:12.469458   10092 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:57:12.469479   10092 main.go:141] libmachine: STDERR: 
	I0920 10:57:12.469498   10092 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/disk.qcow2
	I0920 10:57:12.469504   10092 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:57:12.469514   10092 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:57:12.469542   10092 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:23:61:5b:28:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/disk.qcow2
	I0920 10:57:12.471143   10092 main.go:141] libmachine: STDOUT: 
	I0920 10:57:12.471176   10092 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:57:12.471201   10092 client.go:171] duration metric: took 257.658958ms to LocalClient.Create
	I0920 10:57:14.473391   10092 start.go:128] duration metric: took 2.281420916s to createHost
	I0920 10:57:14.473463   10092 start.go:83] releasing machines lock for "bridge-064000", held for 2.28154925s
	W0920 10:57:14.473550   10092 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:14.490760   10092 out.go:177] * Deleting "bridge-064000" in qemu2 ...
	W0920 10:57:14.519023   10092 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:14.519049   10092 start.go:729] Will try again in 5 seconds ...
	I0920 10:57:19.521302   10092 start.go:360] acquireMachinesLock for bridge-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:57:19.521850   10092 start.go:364] duration metric: took 431.958µs to acquireMachinesLock for "bridge-064000"
	I0920 10:57:19.521923   10092 start.go:93] Provisioning new machine with config: &{Name:bridge-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:57:19.522257   10092 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:57:19.533012   10092 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:57:19.579195   10092 start.go:159] libmachine.API.Create for "bridge-064000" (driver="qemu2")
	I0920 10:57:19.579252   10092 client.go:168] LocalClient.Create starting
	I0920 10:57:19.579387   10092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:57:19.579455   10092 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:19.579474   10092 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:19.579535   10092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:57:19.579580   10092 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:19.579592   10092 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:19.580406   10092 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:57:19.748649   10092 main.go:141] libmachine: Creating SSH key...
	I0920 10:57:19.823834   10092 main.go:141] libmachine: Creating Disk image...
	I0920 10:57:19.823841   10092 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:57:19.824049   10092 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/disk.qcow2
	I0920 10:57:19.834049   10092 main.go:141] libmachine: STDOUT: 
	I0920 10:57:19.834073   10092 main.go:141] libmachine: STDERR: 
	I0920 10:57:19.834135   10092 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/disk.qcow2 +20000M
	I0920 10:57:19.842324   10092 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:57:19.842342   10092 main.go:141] libmachine: STDERR: 
	I0920 10:57:19.842354   10092 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/disk.qcow2
	I0920 10:57:19.842360   10092 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:57:19.842370   10092 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:57:19.842400   10092 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:f1:2a:ec:75:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/bridge-064000/disk.qcow2
	I0920 10:57:19.844100   10092 main.go:141] libmachine: STDOUT: 
	I0920 10:57:19.844122   10092 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:57:19.844144   10092 client.go:171] duration metric: took 264.886792ms to LocalClient.Create
	I0920 10:57:21.846300   10092 start.go:128] duration metric: took 2.323991917s to createHost
	I0920 10:57:21.846366   10092 start.go:83] releasing machines lock for "bridge-064000", held for 2.324508833s
	W0920 10:57:21.846656   10092 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:21.856234   10092 out.go:201] 
	W0920 10:57:21.862352   10092 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:57:21.862388   10092 out.go:270] * 
	* 
	W0920 10:57:21.864614   10092 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:57:21.879274   10092 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.82s)
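All of the qemu2 failures in this group share one root cause: the qemu-img steps succeed, but the VM never launches because /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, i.e. no socket_vmnet daemon was listening on the CI host. Below is a minimal Go sketch of the connection check that is failing; the socket path is taken from the logs above, and the program itself is illustrative, not part of minikube or the test suite.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket the logs show socket_vmnet_client using.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// This is the condition the logs report as:
			//   Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails, every qemu2 test that selects the socket_vmnet network will fail the same way; starting the socket_vmnet daemon (it must run as root to use vmnet) before the run should clear the whole group.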

TestNetworkPlugins/group/kubenet/Start (9.72s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-064000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.722822708s)

-- stdout --
	* [kubenet-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-064000" primary control-plane node in "kubenet-064000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:57:24.113462   10201 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:57:24.113591   10201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:57:24.113593   10201 out.go:358] Setting ErrFile to fd 2...
	I0920 10:57:24.113596   10201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:57:24.113737   10201 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:57:24.114816   10201 out.go:352] Setting JSON to false
	I0920 10:57:24.131563   10201 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7015,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:57:24.131644   10201 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:57:24.137231   10201 out.go:177] * [kubenet-064000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:57:24.145437   10201 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:57:24.145513   10201 notify.go:220] Checking for updates...
	I0920 10:57:24.154342   10201 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:57:24.157451   10201 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:57:24.160291   10201 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:57:24.163370   10201 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:57:24.166354   10201 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:57:24.169589   10201 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:57:24.169659   10201 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:57:24.169707   10201 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:57:24.173318   10201 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:57:24.180337   10201 start.go:297] selected driver: qemu2
	I0920 10:57:24.180345   10201 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:57:24.180352   10201 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:57:24.182412   10201 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:57:24.186300   10201 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:57:24.189467   10201 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:57:24.189485   10201 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0920 10:57:24.189515   10201 start.go:340] cluster config:
	{Name:kubenet-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:57:24.193010   10201 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:24.201340   10201 out.go:177] * Starting "kubenet-064000" primary control-plane node in "kubenet-064000" cluster
	I0920 10:57:24.204237   10201 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:57:24.204257   10201 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:57:24.204264   10201 cache.go:56] Caching tarball of preloaded images
	I0920 10:57:24.204347   10201 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:57:24.204352   10201 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:57:24.204419   10201 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/kubenet-064000/config.json ...
	I0920 10:57:24.204429   10201 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/kubenet-064000/config.json: {Name:mkbe6d5837e393e4397e1035c9e673da47589892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:57:24.204780   10201 start.go:360] acquireMachinesLock for kubenet-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:57:24.204817   10201 start.go:364] duration metric: took 28.667µs to acquireMachinesLock for "kubenet-064000"
	I0920 10:57:24.204829   10201 start.go:93] Provisioning new machine with config: &{Name:kubenet-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:57:24.204863   10201 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:57:24.212333   10201 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:57:24.227993   10201 start.go:159] libmachine.API.Create for "kubenet-064000" (driver="qemu2")
	I0920 10:57:24.228028   10201 client.go:168] LocalClient.Create starting
	I0920 10:57:24.228081   10201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:57:24.228110   10201 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:24.228119   10201 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:24.228160   10201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:57:24.228189   10201 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:24.228196   10201 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:24.228535   10201 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:57:24.391432   10201 main.go:141] libmachine: Creating SSH key...
	I0920 10:57:24.450012   10201 main.go:141] libmachine: Creating Disk image...
	I0920 10:57:24.450019   10201 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:57:24.450243   10201 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/disk.qcow2
	I0920 10:57:24.459622   10201 main.go:141] libmachine: STDOUT: 
	I0920 10:57:24.459638   10201 main.go:141] libmachine: STDERR: 
	I0920 10:57:24.459714   10201 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/disk.qcow2 +20000M
	I0920 10:57:24.467585   10201 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:57:24.467599   10201 main.go:141] libmachine: STDERR: 
	I0920 10:57:24.467614   10201 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/disk.qcow2
	I0920 10:57:24.467620   10201 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:57:24.467632   10201 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:57:24.467679   10201 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:8e:70:38:c6:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/disk.qcow2
	I0920 10:57:24.469297   10201 main.go:141] libmachine: STDOUT: 
	I0920 10:57:24.469313   10201 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:57:24.469335   10201 client.go:171] duration metric: took 241.298875ms to LocalClient.Create
	I0920 10:57:26.471375   10201 start.go:128] duration metric: took 2.266521791s to createHost
	I0920 10:57:26.471395   10201 start.go:83] releasing machines lock for "kubenet-064000", held for 2.266587458s
	W0920 10:57:26.471412   10201 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:26.475918   10201 out.go:177] * Deleting "kubenet-064000" in qemu2 ...
	W0920 10:57:26.487252   10201 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:26.487260   10201 start.go:729] Will try again in 5 seconds ...
	I0920 10:57:31.489341   10201 start.go:360] acquireMachinesLock for kubenet-064000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:57:31.489551   10201 start.go:364] duration metric: took 157.167µs to acquireMachinesLock for "kubenet-064000"
	I0920 10:57:31.489602   10201 start.go:93] Provisioning new machine with config: &{Name:kubenet-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:57:31.489673   10201 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:57:31.500979   10201 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 10:57:31.531439   10201 start.go:159] libmachine.API.Create for "kubenet-064000" (driver="qemu2")
	I0920 10:57:31.531482   10201 client.go:168] LocalClient.Create starting
	I0920 10:57:31.531597   10201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:57:31.531654   10201 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:31.531679   10201 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:31.531728   10201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:57:31.531765   10201 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:31.531774   10201 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:31.532213   10201 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:57:31.695279   10201 main.go:141] libmachine: Creating SSH key...
	I0920 10:57:31.746002   10201 main.go:141] libmachine: Creating Disk image...
	I0920 10:57:31.746013   10201 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:57:31.746215   10201 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/disk.qcow2
	I0920 10:57:31.755586   10201 main.go:141] libmachine: STDOUT: 
	I0920 10:57:31.755605   10201 main.go:141] libmachine: STDERR: 
	I0920 10:57:31.755660   10201 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/disk.qcow2 +20000M
	I0920 10:57:31.764032   10201 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:57:31.764062   10201 main.go:141] libmachine: STDERR: 
	I0920 10:57:31.764086   10201 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/disk.qcow2
	I0920 10:57:31.764092   10201 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:57:31.764099   10201 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:57:31.764139   10201 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:b4:ad:f2:9c:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/kubenet-064000/disk.qcow2
	I0920 10:57:31.765844   10201 main.go:141] libmachine: STDOUT: 
	I0920 10:57:31.765858   10201 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:57:31.765871   10201 client.go:171] duration metric: took 234.385041ms to LocalClient.Create
	I0920 10:57:33.767961   10201 start.go:128] duration metric: took 2.278287083s to createHost
	I0920 10:57:33.768024   10201 start.go:83] releasing machines lock for "kubenet-064000", held for 2.278462875s
	W0920 10:57:33.768196   10201 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:33.778561   10201 out.go:201] 
	W0920 10:57:33.783602   10201 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:57:33.783617   10201 out.go:270] * 
	* 
	W0920 10:57:33.784742   10201 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:57:33.797591   10201 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.72s)
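The retry visible in the stderr above (start.go:714, then start.go:729) is a single fixed-delay retry: delete the half-created machine, wait five seconds, attempt the create once more, and exit with GUEST_PROVISION / exit status 80 on the second failure. A simplified Go sketch of that control flow as it appears in these logs, not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the libmachine.API.Create call; in these runs it
	// always fails because nothing is listening on /var/run/socket_vmnet.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status net_test.go asserts on
			}
		}
	}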

TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-705000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-705000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.899117042s)

-- stdout --
	* [old-k8s-version-705000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-705000" primary control-plane node in "old-k8s-version-705000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-705000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:57:36.029700   10315 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:57:36.029827   10315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:57:36.029830   10315 out.go:358] Setting ErrFile to fd 2...
	I0920 10:57:36.029833   10315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:57:36.029980   10315 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:57:36.031053   10315 out.go:352] Setting JSON to false
	I0920 10:57:36.047389   10315 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7027,"bootTime":1726848029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:57:36.047468   10315 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:57:36.052977   10315 out.go:177] * [old-k8s-version-705000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:57:36.061717   10315 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:57:36.061780   10315 notify.go:220] Checking for updates...
	I0920 10:57:36.069663   10315 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:57:36.072744   10315 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:57:36.075764   10315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:57:36.077211   10315 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:57:36.080696   10315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:57:36.084102   10315 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:57:36.084178   10315 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:57:36.084223   10315 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:57:36.085962   10315 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:57:36.092709   10315 start.go:297] selected driver: qemu2
	I0920 10:57:36.092715   10315 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:57:36.092721   10315 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:57:36.095026   10315 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:57:36.098527   10315 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:57:36.101759   10315 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:57:36.101776   10315 cni.go:84] Creating CNI manager for ""
	I0920 10:57:36.101798   10315 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 10:57:36.101829   10315 start.go:340] cluster config:
	{Name:old-k8s-version-705000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:57:36.105616   10315 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:36.113696   10315 out.go:177] * Starting "old-k8s-version-705000" primary control-plane node in "old-k8s-version-705000" cluster
	I0920 10:57:36.117642   10315 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:57:36.117657   10315 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:57:36.117664   10315 cache.go:56] Caching tarball of preloaded images
	I0920 10:57:36.117718   10315 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:57:36.117724   10315 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 10:57:36.117780   10315 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/old-k8s-version-705000/config.json ...
	I0920 10:57:36.117792   10315 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/old-k8s-version-705000/config.json: {Name:mkde65f75043f78e67dd9c184663e64f4a0a3530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:57:36.118071   10315 start.go:360] acquireMachinesLock for old-k8s-version-705000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:57:36.118104   10315 start.go:364] duration metric: took 27µs to acquireMachinesLock for "old-k8s-version-705000"
	I0920 10:57:36.118116   10315 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-705000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:57:36.118144   10315 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:57:36.126742   10315 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:57:36.142537   10315 start.go:159] libmachine.API.Create for "old-k8s-version-705000" (driver="qemu2")
	I0920 10:57:36.142566   10315 client.go:168] LocalClient.Create starting
	I0920 10:57:36.142631   10315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:57:36.142661   10315 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:36.142670   10315 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:36.142708   10315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:57:36.142733   10315 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:36.142741   10315 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:36.143092   10315 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:57:36.307313   10315 main.go:141] libmachine: Creating SSH key...
	I0920 10:57:36.475834   10315 main.go:141] libmachine: Creating Disk image...
	I0920 10:57:36.475843   10315 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:57:36.476065   10315 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2
	I0920 10:57:36.485232   10315 main.go:141] libmachine: STDOUT: 
	I0920 10:57:36.485249   10315 main.go:141] libmachine: STDERR: 
	I0920 10:57:36.485320   10315 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2 +20000M
	I0920 10:57:36.493210   10315 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:57:36.493234   10315 main.go:141] libmachine: STDERR: 
	I0920 10:57:36.493251   10315 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2
	I0920 10:57:36.493256   10315 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:57:36.493267   10315 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:57:36.493302   10315 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:ad:e1:23:0f:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2
	I0920 10:57:36.495042   10315 main.go:141] libmachine: STDOUT: 
	I0920 10:57:36.495063   10315 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:57:36.495080   10315 client.go:171] duration metric: took 352.511875ms to LocalClient.Create
	I0920 10:57:38.497266   10315 start.go:128] duration metric: took 2.379108625s to createHost
	I0920 10:57:38.497342   10315 start.go:83] releasing machines lock for "old-k8s-version-705000", held for 2.379246708s
	W0920 10:57:38.497403   10315 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:38.508643   10315 out.go:177] * Deleting "old-k8s-version-705000" in qemu2 ...
	W0920 10:57:38.535716   10315 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:38.535741   10315 start.go:729] Will try again in 5 seconds ...
	I0920 10:57:43.537868   10315 start.go:360] acquireMachinesLock for old-k8s-version-705000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:57:43.538129   10315 start.go:364] duration metric: took 193.041µs to acquireMachinesLock for "old-k8s-version-705000"
	I0920 10:57:43.538198   10315 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-705000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:57:43.538297   10315 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:57:43.548630   10315 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:57:43.576705   10315 start.go:159] libmachine.API.Create for "old-k8s-version-705000" (driver="qemu2")
	I0920 10:57:43.576745   10315 client.go:168] LocalClient.Create starting
	I0920 10:57:43.576835   10315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:57:43.576880   10315 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:43.576892   10315 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:43.576939   10315 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:57:43.576974   10315 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:43.576985   10315 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:43.577415   10315 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:57:43.740349   10315 main.go:141] libmachine: Creating SSH key...
	I0920 10:57:43.827446   10315 main.go:141] libmachine: Creating Disk image...
	I0920 10:57:43.827456   10315 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:57:43.827678   10315 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2
	I0920 10:57:43.837183   10315 main.go:141] libmachine: STDOUT: 
	I0920 10:57:43.837209   10315 main.go:141] libmachine: STDERR: 
	I0920 10:57:43.837280   10315 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2 +20000M
	I0920 10:57:43.845333   10315 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:57:43.845349   10315 main.go:141] libmachine: STDERR: 
	I0920 10:57:43.845364   10315 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2
	I0920 10:57:43.845371   10315 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:57:43.845379   10315 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:57:43.845424   10315 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:24:d2:0f:c2:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2
	I0920 10:57:43.847028   10315 main.go:141] libmachine: STDOUT: 
	I0920 10:57:43.847043   10315 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:57:43.847056   10315 client.go:171] duration metric: took 270.307291ms to LocalClient.Create
	I0920 10:57:45.849233   10315 start.go:128] duration metric: took 2.310910958s to createHost
	I0920 10:57:45.849294   10315 start.go:83] releasing machines lock for "old-k8s-version-705000", held for 2.311162833s
	W0920 10:57:45.849423   10315 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-705000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-705000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:45.870923   10315 out.go:201] 
	W0920 10:57:45.874818   10315 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:57:45.874847   10315 out.go:270] * 
	* 
	W0920 10:57:45.875907   10315 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:57:45.890813   10315 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-705000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000: exit status 7 (38.8555ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-705000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-705000 create -f testdata/busybox.yaml: exit status 1 (27.368125ms)

** stderr ** 
	error: context "old-k8s-version-705000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-705000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000: exit status 7 (30.32375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-705000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000: exit status 7 (29.6425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-705000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-705000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-705000 describe deploy/metrics-server -n kube-system: exit status 1 (27.1945ms)

** stderr ** 
	error: context "old-k8s-version-705000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-705000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000: exit status 7 (31.107458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-705000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-705000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.19013425s)

-- stdout --
	* [old-k8s-version-705000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-705000" primary control-plane node in "old-k8s-version-705000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-705000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-705000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:57:48.293498   10359 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:57:48.293651   10359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:57:48.293654   10359 out.go:358] Setting ErrFile to fd 2...
	I0920 10:57:48.293657   10359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:57:48.293794   10359 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:57:48.294811   10359 out.go:352] Setting JSON to false
	I0920 10:57:48.311191   10359 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7039,"bootTime":1726848029,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:57:48.311262   10359 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:57:48.316457   10359 out.go:177] * [old-k8s-version-705000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:57:48.323611   10359 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:57:48.323635   10359 notify.go:220] Checking for updates...
	I0920 10:57:48.330576   10359 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:57:48.333614   10359 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:57:48.336634   10359 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:57:48.339613   10359 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:57:48.342630   10359 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:57:48.344257   10359 config.go:182] Loaded profile config "old-k8s-version-705000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0920 10:57:48.347569   10359 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 10:57:48.350619   10359 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:57:48.352438   10359 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:57:48.359667   10359 start.go:297] selected driver: qemu2
	I0920 10:57:48.359674   10359 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-705000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:57:48.359766   10359 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:57:48.362122   10359 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:57:48.362146   10359 cni.go:84] Creating CNI manager for ""
	I0920 10:57:48.362175   10359 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 10:57:48.362200   10359 start.go:340] cluster config:
	{Name:old-k8s-version-705000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:57:48.365570   10359 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:48.373649   10359 out.go:177] * Starting "old-k8s-version-705000" primary control-plane node in "old-k8s-version-705000" cluster
	I0920 10:57:48.377443   10359 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:57:48.377460   10359 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:57:48.377467   10359 cache.go:56] Caching tarball of preloaded images
	I0920 10:57:48.377531   10359 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:57:48.377538   10359 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 10:57:48.377614   10359 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/old-k8s-version-705000/config.json ...
	I0920 10:57:48.378105   10359 start.go:360] acquireMachinesLock for old-k8s-version-705000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:57:48.378134   10359 start.go:364] duration metric: took 22.375µs to acquireMachinesLock for "old-k8s-version-705000"
	I0920 10:57:48.378143   10359 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:57:48.378147   10359 fix.go:54] fixHost starting: 
	I0920 10:57:48.378261   10359 fix.go:112] recreateIfNeeded on old-k8s-version-705000: state=Stopped err=<nil>
	W0920 10:57:48.378269   10359 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:57:48.382650   10359 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-705000" ...
	I0920 10:57:48.390605   10359 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:57:48.390634   10359 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:24:d2:0f:c2:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2
	I0920 10:57:48.392525   10359 main.go:141] libmachine: STDOUT: 
	I0920 10:57:48.392549   10359 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:57:48.392581   10359 fix.go:56] duration metric: took 14.434417ms for fixHost
	I0920 10:57:48.392586   10359 start.go:83] releasing machines lock for "old-k8s-version-705000", held for 14.44925ms
	W0920 10:57:48.392592   10359 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:57:48.392623   10359 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:48.392627   10359 start.go:729] Will try again in 5 seconds ...
	I0920 10:57:53.394871   10359 start.go:360] acquireMachinesLock for old-k8s-version-705000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:57:53.395456   10359 start.go:364] duration metric: took 455.834µs to acquireMachinesLock for "old-k8s-version-705000"
	I0920 10:57:53.395611   10359 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:57:53.395632   10359 fix.go:54] fixHost starting: 
	I0920 10:57:53.396369   10359 fix.go:112] recreateIfNeeded on old-k8s-version-705000: state=Stopped err=<nil>
	W0920 10:57:53.396395   10359 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:57:53.404833   10359 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-705000" ...
	I0920 10:57:53.407791   10359 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:57:53.408161   10359 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:24:d2:0f:c2:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/old-k8s-version-705000/disk.qcow2
	I0920 10:57:53.417434   10359 main.go:141] libmachine: STDOUT: 
	I0920 10:57:53.417495   10359 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:57:53.417596   10359 fix.go:56] duration metric: took 21.965584ms for fixHost
	I0920 10:57:53.417617   10359 start.go:83] releasing machines lock for "old-k8s-version-705000", held for 22.13875ms
	W0920 10:57:53.417799   10359 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-705000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-705000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:53.425625   10359 out.go:201] 
	W0920 10:57:53.429937   10359 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:57:53.430011   10359 out.go:270] * 
	* 
	W0920 10:57:53.432550   10359 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:57:53.441778   10359 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-705000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000: exit status 7 (66.703375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-705000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000: exit status 7 (33.037625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-705000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-705000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-705000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.16425ms)

** stderr ** 
	error: context "old-k8s-version-705000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-705000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000: exit status 7 (29.516791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-705000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000: exit status 7 (29.689125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-705000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-705000 --alsologtostderr -v=1: exit status 83 (43.4045ms)

-- stdout --
	* The control-plane node old-k8s-version-705000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-705000"

-- /stdout --
** stderr ** 
	I0920 10:57:53.713953   10380 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:57:53.714932   10380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:57:53.714936   10380 out.go:358] Setting ErrFile to fd 2...
	I0920 10:57:53.714938   10380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:57:53.715097   10380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:57:53.715300   10380 out.go:352] Setting JSON to false
	I0920 10:57:53.715311   10380 mustload.go:65] Loading cluster: old-k8s-version-705000
	I0920 10:57:53.715542   10380 config.go:182] Loaded profile config "old-k8s-version-705000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0920 10:57:53.720057   10380 out.go:177] * The control-plane node old-k8s-version-705000 host is not running: state=Stopped
	I0920 10:57:53.724256   10380 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-705000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-705000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000: exit status 7 (29.500916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-705000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000: exit status 7 (30.063209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.896073459s)

-- stdout --
	* [no-preload-918000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-918000" primary control-plane node in "no-preload-918000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-918000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:57:54.036893   10397 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:57:54.037012   10397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:57:54.037015   10397 out.go:358] Setting ErrFile to fd 2...
	I0920 10:57:54.037017   10397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:57:54.037151   10397 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:57:54.038256   10397 out.go:352] Setting JSON to false
	I0920 10:57:54.055224   10397 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7045,"bootTime":1726848029,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:57:54.055302   10397 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:57:54.059843   10397 out.go:177] * [no-preload-918000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:57:54.065941   10397 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:57:54.066049   10397 notify.go:220] Checking for updates...
	I0920 10:57:54.072932   10397 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:57:54.075849   10397 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:57:54.078878   10397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:57:54.081884   10397 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:57:54.084766   10397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:57:54.088193   10397 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:57:54.088254   10397 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:57:54.088317   10397 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:57:54.091990   10397 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:57:54.098909   10397 start.go:297] selected driver: qemu2
	I0920 10:57:54.098914   10397 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:57:54.098920   10397 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:57:54.101049   10397 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:57:54.104919   10397 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:57:54.108968   10397 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:57:54.108999   10397 cni.go:84] Creating CNI manager for ""
	I0920 10:57:54.109027   10397 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:57:54.109047   10397 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:57:54.109070   10397 start.go:340] cluster config:
	{Name:no-preload-918000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:57:54.112425   10397 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:54.120717   10397 out.go:177] * Starting "no-preload-918000" primary control-plane node in "no-preload-918000" cluster
	I0920 10:57:54.124888   10397 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:57:54.124945   10397 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/no-preload-918000/config.json ...
	I0920 10:57:54.124960   10397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/no-preload-918000/config.json: {Name:mk694c54b3f61b036859d871de3918f5ce650374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:57:54.124964   10397 cache.go:107] acquiring lock: {Name:mk68c05f40ad97233a07e049f52f8b9752387135 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:54.124981   10397 cache.go:107] acquiring lock: {Name:mkb035708a6989d2190ed610d742642ae2250228 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:54.125024   10397 cache.go:115] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0920 10:57:54.125032   10397 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 71.042µs
	I0920 10:57:54.125037   10397 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0920 10:57:54.125042   10397 cache.go:107] acquiring lock: {Name:mk0e9d5140b066e72544d2b157bbe4c7543e64ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:54.125105   10397 cache.go:107] acquiring lock: {Name:mk309c96afa62eee0d6adeb71775f94f1cfb6102 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:54.125117   10397 cache.go:107] acquiring lock: {Name:mkc975c1fa75a29c25702bf069be81d616638a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:54.124965   10397 cache.go:107] acquiring lock: {Name:mkb384cc2e6de12335687e3c6ffce6c6ea5729ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:54.125137   10397 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 10:57:54.125157   10397 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 10:57:54.125183   10397 start.go:360] acquireMachinesLock for no-preload-918000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:57:54.125201   10397 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 10:57:54.125216   10397 start.go:364] duration metric: took 27.584µs to acquireMachinesLock for "no-preload-918000"
	I0920 10:57:54.125271   10397 cache.go:107] acquiring lock: {Name:mk4c96c0d306c27eb157ae9bdaa9d0a915456f84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:54.125271   10397 cache.go:107] acquiring lock: {Name:mk8e345929d2710ba97a9ca24cc0f5d35fe3803c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:57:54.125299   10397 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 10:57:54.125266   10397 start.go:93] Provisioning new machine with config: &{Name:no-preload-918000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:57:54.125335   10397 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:57:54.125371   10397 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 10:57:54.125405   10397 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 10:57:54.125552   10397 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 10:57:54.133852   10397 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:57:54.138191   10397 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 10:57:54.138288   10397 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 10:57:54.138301   10397 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 10:57:54.138337   10397 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 10:57:54.140151   10397 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 10:57:54.140163   10397 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 10:57:54.140234   10397 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 10:57:54.149558   10397 start.go:159] libmachine.API.Create for "no-preload-918000" (driver="qemu2")
	I0920 10:57:54.149574   10397 client.go:168] LocalClient.Create starting
	I0920 10:57:54.149645   10397 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:57:54.149674   10397 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:54.149682   10397 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:54.149728   10397 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:57:54.149751   10397 main.go:141] libmachine: Decoding PEM data...
	I0920 10:57:54.149765   10397 main.go:141] libmachine: Parsing certificate...
	I0920 10:57:54.150125   10397 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:57:54.317178   10397 main.go:141] libmachine: Creating SSH key...
	I0920 10:57:54.412788   10397 main.go:141] libmachine: Creating Disk image...
	I0920 10:57:54.412829   10397 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:57:54.413060   10397 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2
	I0920 10:57:54.422813   10397 main.go:141] libmachine: STDOUT: 
	I0920 10:57:54.422839   10397 main.go:141] libmachine: STDERR: 
	I0920 10:57:54.422901   10397 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2 +20000M
	I0920 10:57:54.431446   10397 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:57:54.431465   10397 main.go:141] libmachine: STDERR: 
	I0920 10:57:54.431479   10397 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2
	I0920 10:57:54.431482   10397 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:57:54.431495   10397 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:57:54.431527   10397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6d:11:be:a6:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2
	I0920 10:57:54.433325   10397 main.go:141] libmachine: STDOUT: 
	I0920 10:57:54.433345   10397 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:57:54.433364   10397 client.go:171] duration metric: took 283.786541ms to LocalClient.Create
	I0920 10:57:54.547284   10397 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0920 10:57:54.548584   10397 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 10:57:54.554910   10397 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 10:57:54.570488   10397 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 10:57:54.593560   10397 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0920 10:57:54.602332   10397 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 10:57:54.617446   10397 cache.go:162] opening:  /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 10:57:54.767845   10397 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0920 10:57:54.767859   10397 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 642.609792ms
	I0920 10:57:54.767865   10397 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0920 10:57:56.433522   10397 start.go:128] duration metric: took 2.308188166s to createHost
	I0920 10:57:56.433546   10397 start.go:83] releasing machines lock for "no-preload-918000", held for 2.308324167s
	W0920 10:57:56.433574   10397 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:56.448104   10397 out.go:177] * Deleting "no-preload-918000" in qemu2 ...
	W0920 10:57:56.466109   10397 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:57:56.466119   10397 start.go:729] Will try again in 5 seconds ...
	I0920 10:57:57.627409   10397 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0920 10:57:57.627452   10397 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.502235s
	I0920 10:57:57.627469   10397 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0920 10:57:57.899925   10397 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0920 10:57:57.899955   10397 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 3.775018791s
	I0920 10:57:57.899970   10397 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0920 10:57:58.190199   10397 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0920 10:57:58.190253   10397 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.065209334s
	I0920 10:57:58.190277   10397 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0920 10:57:58.227095   10397 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0920 10:57:58.227164   10397 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.10214075s
	I0920 10:57:58.227231   10397 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0920 10:57:58.343544   10397 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0920 10:57:58.343589   10397 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.21862975s
	I0920 10:57:58.343604   10397 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0920 10:58:00.298560   10397 cache.go:157] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0920 10:58:00.298608   10397 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 6.173535292s
	I0920 10:58:00.298629   10397 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0920 10:58:00.298655   10397 cache.go:87] Successfully saved all images to host disk.
	I0920 10:58:01.468320   10397 start.go:360] acquireMachinesLock for no-preload-918000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:01.469025   10397 start.go:364] duration metric: took 612.709µs to acquireMachinesLock for "no-preload-918000"
	I0920 10:58:01.469167   10397 start.go:93] Provisioning new machine with config: &{Name:no-preload-918000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:58:01.469433   10397 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:58:01.480055   10397 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:58:01.526664   10397 start.go:159] libmachine.API.Create for "no-preload-918000" (driver="qemu2")
	I0920 10:58:01.526721   10397 client.go:168] LocalClient.Create starting
	I0920 10:58:01.526837   10397 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:58:01.526901   10397 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:01.526920   10397 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:01.527006   10397 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:58:01.527050   10397 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:01.527069   10397 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:01.527599   10397 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:58:01.699704   10397 main.go:141] libmachine: Creating SSH key...
	I0920 10:58:01.845624   10397 main.go:141] libmachine: Creating Disk image...
	I0920 10:58:01.845632   10397 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:58:01.845839   10397 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2
	I0920 10:58:01.855519   10397 main.go:141] libmachine: STDOUT: 
	I0920 10:58:01.855541   10397 main.go:141] libmachine: STDERR: 
	I0920 10:58:01.855620   10397 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2 +20000M
	I0920 10:58:01.863539   10397 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:58:01.863555   10397 main.go:141] libmachine: STDERR: 
	I0920 10:58:01.863571   10397 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2
	I0920 10:58:01.863578   10397 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:58:01.863587   10397 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:01.863625   10397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:a8:4a:f1:a2:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2
	I0920 10:58:01.865337   10397 main.go:141] libmachine: STDOUT: 
	I0920 10:58:01.865349   10397 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:01.865366   10397 client.go:171] duration metric: took 338.639834ms to LocalClient.Create
	I0920 10:58:03.867221   10397 start.go:128] duration metric: took 2.39772625s to createHost
	I0920 10:58:03.867312   10397 start.go:83] releasing machines lock for "no-preload-918000", held for 2.398279083s
	W0920 10:58:03.867620   10397 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-918000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-918000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:03.880415   10397 out.go:201] 
	W0920 10:58:03.884367   10397 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:58:03.884388   10397 out.go:270] * 
	* 
	W0920 10:58:03.891817   10397 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:58:03.896351   10397 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (50.92075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.95s)
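
Every failure in this group traces back to the same line in the stderr above: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so QEMU is never handed a network file descriptor and the start is abandoned after one retry. A minimal triage sketch for the build agent follows; the paths are taken from this log, while the daemon binary location and the --vmnet-gateway address are assumptions (the standard make-install layout and an illustrative default), not values from this run.

    # Is the socket_vmnet daemon alive, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # If not, start it by hand (needs root; gateway address is only an example)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

    # The client should now exit 0 when handed a trivial command
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

Once the client call above succeeds, re-running the failed start command from this test should get past host creation.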

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-918000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-918000 create -f testdata/busybox.yaml: exit status 1 (28.926125ms)

** stderr ** 
	error: context "no-preload-918000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-918000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (29.630041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (30.256875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
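
This failure and the ones below it are cascades: FirstStart never created the cluster, so the kubeconfig context no-preload-918000 does not exist and every kubectl invocation fails before it can exercise the deployment under test. A hypothetical pre-check, not part of the actual harness, would separate "cluster never provisioned" from a genuine deploy failure:

    # Hypothetical guard: bail out early when the context was never created
    if ! kubectl config get-contexts -o name | grep -qx no-preload-918000; then
        echo "no-preload-918000 was never provisioned; skipping deploy checks" >&2
        exit 1
    fi
    kubectl --context no-preload-918000 create -f testdata/busybox.yaml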

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-918000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-918000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-918000 describe deploy/metrics-server -n kube-system: exit status 1 (27.770959ms)

** stderr ** 
	error: context "no-preload-918000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-918000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (30.267625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.181357333s)

-- stdout --
	* [no-preload-918000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-918000" primary control-plane node in "no-preload-918000" cluster
	* Restarting existing qemu2 VM for "no-preload-918000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-918000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:58:07.909735   10482 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:58:07.909880   10482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:07.909883   10482 out.go:358] Setting ErrFile to fd 2...
	I0920 10:58:07.909886   10482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:07.910021   10482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:58:07.911071   10482 out.go:352] Setting JSON to false
	I0920 10:58:07.927109   10482 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7058,"bootTime":1726848029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:58:07.927180   10482 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:58:07.931952   10482 out.go:177] * [no-preload-918000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:58:07.937994   10482 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:58:07.938049   10482 notify.go:220] Checking for updates...
	I0920 10:58:07.946008   10482 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:58:07.949884   10482 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:58:07.952951   10482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:58:07.955966   10482 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:58:07.958897   10482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:58:07.962233   10482 config.go:182] Loaded profile config "no-preload-918000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:07.962510   10482 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:58:07.965989   10482 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:58:07.973003   10482 start.go:297] selected driver: qemu2
	I0920 10:58:07.973008   10482 start.go:901] validating driver "qemu2" against &{Name:no-preload-918000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:58:07.973072   10482 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:58:07.975350   10482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:58:07.975377   10482 cni.go:84] Creating CNI manager for ""
	I0920 10:58:07.975400   10482 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:58:07.975423   10482 start.go:340] cluster config:
	{Name:no-preload-918000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:58:07.978741   10482 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:07.985905   10482 out.go:177] * Starting "no-preload-918000" primary control-plane node in "no-preload-918000" cluster
	I0920 10:58:07.989950   10482 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:58:07.990039   10482 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/no-preload-918000/config.json ...
	I0920 10:58:07.990096   10482 cache.go:107] acquiring lock: {Name:mk68c05f40ad97233a07e049f52f8b9752387135 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:07.990098   10482 cache.go:107] acquiring lock: {Name:mkb035708a6989d2190ed610d742642ae2250228 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:07.990098   10482 cache.go:107] acquiring lock: {Name:mkb384cc2e6de12335687e3c6ffce6c6ea5729ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:07.990146   10482 cache.go:115] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0920 10:58:07.990153   10482 cache.go:115] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0920 10:58:07.990159   10482 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 69.5µs
	I0920 10:58:07.990168   10482 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0920 10:58:07.990160   10482 cache.go:107] acquiring lock: {Name:mk309c96afa62eee0d6adeb71775f94f1cfb6102 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:07.990174   10482 cache.go:107] acquiring lock: {Name:mk8e345929d2710ba97a9ca24cc0f5d35fe3803c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:07.990153   10482 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 66.875µs
	I0920 10:58:07.990189   10482 cache.go:115] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0920 10:58:07.990286   10482 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0920 10:58:07.990292   10482 cache.go:107] acquiring lock: {Name:mkc975c1fa75a29c25702bf069be81d616638a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:07.990292   10482 cache.go:107] acquiring lock: {Name:mk0e9d5140b066e72544d2b157bbe4c7543e64ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:07.990210   10482 cache.go:115] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0920 10:58:07.990316   10482 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 142.542µs
	I0920 10:58:07.990258   10482 cache.go:107] acquiring lock: {Name:mk4c96c0d306c27eb157ae9bdaa9d0a915456f84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:07.990295   10482 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 205.458µs
	I0920 10:58:07.990386   10482 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0920 10:58:07.990205   10482 cache.go:115] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0920 10:58:07.990396   10482 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 237.041µs
	I0920 10:58:07.990341   10482 cache.go:115] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0920 10:58:07.990404   10482 cache.go:115] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0920 10:58:07.990409   10482 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 141.375µs
	I0920 10:58:07.990414   10482 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0920 10:58:07.990410   10482 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 169.958µs
	I0920 10:58:07.990419   10482 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0920 10:58:07.990369   10482 cache.go:115] /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0920 10:58:07.990423   10482 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 150.416µs
	I0920 10:58:07.990425   10482 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0920 10:58:07.990406   10482 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0920 10:58:07.990350   10482 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0920 10:58:07.990430   10482 cache.go:87] Successfully saved all images to host disk.
	I0920 10:58:07.990445   10482 start.go:360] acquireMachinesLock for no-preload-918000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:07.990474   10482 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "no-preload-918000"
	I0920 10:58:07.990484   10482 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:58:07.990488   10482 fix.go:54] fixHost starting: 
	I0920 10:58:07.990596   10482 fix.go:112] recreateIfNeeded on no-preload-918000: state=Stopped err=<nil>
	W0920 10:58:07.990605   10482 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:58:07.998998   10482 out.go:177] * Restarting existing qemu2 VM for "no-preload-918000" ...
	I0920 10:58:08.002995   10482 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:08.003025   10482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:a8:4a:f1:a2:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2
	I0920 10:58:08.004838   10482 main.go:141] libmachine: STDOUT: 
	I0920 10:58:08.004855   10482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:08.004877   10482 fix.go:56] duration metric: took 14.388125ms for fixHost
	I0920 10:58:08.004881   10482 start.go:83] releasing machines lock for "no-preload-918000", held for 14.403166ms
	W0920 10:58:08.004886   10482 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:58:08.004919   10482 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:08.004923   10482 start.go:729] Will try again in 5 seconds ...
	I0920 10:58:13.007092   10482 start.go:360] acquireMachinesLock for no-preload-918000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:13.007559   10482 start.go:364] duration metric: took 378.25µs to acquireMachinesLock for "no-preload-918000"
	I0920 10:58:13.007703   10482 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:58:13.007721   10482 fix.go:54] fixHost starting: 
	I0920 10:58:13.008563   10482 fix.go:112] recreateIfNeeded on no-preload-918000: state=Stopped err=<nil>
	W0920 10:58:13.008594   10482 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:58:13.013116   10482 out.go:177] * Restarting existing qemu2 VM for "no-preload-918000" ...
	I0920 10:58:13.021081   10482 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:13.021290   10482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:a8:4a:f1:a2:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/no-preload-918000/disk.qcow2
	I0920 10:58:13.031035   10482 main.go:141] libmachine: STDOUT: 
	I0920 10:58:13.031097   10482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:13.031186   10482 fix.go:56] duration metric: took 23.463708ms for fixHost
	I0920 10:58:13.031201   10482 start.go:83] releasing machines lock for "no-preload-918000", held for 23.621333ms
	W0920 10:58:13.031369   10482 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-918000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-918000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:13.038990   10482 out.go:201] 
	W0920 10:58:13.042069   10482 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:58:13.042088   10482 out.go:270] * 
	* 
	W0920 10:58:13.044524   10482 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:58:13.054034   10482 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (66.011958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
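
The advice printed above ("minikube delete -p no-preload-918000" may fix it) cannot help on its own: the restart fails for the same reason as the first start, a refused connection to /var/run/socket_vmnet. A plausible recovery order, assuming the daemon has been brought back as sketched under FirstStart, reuses the exact binary and flags from this run:

    # Remove the half-created profile, then retry with the same arguments
    out/minikube-darwin-arm64 delete -p no-preload-918000
    out/minikube-darwin-arm64 start -p no-preload-918000 --memory=2200 \
        --alsologtostderr --wait=true --preload=false --driver=qemu2 \
        --kubernetes-version=v1.31.1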

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-918000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (31.438792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-918000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-918000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-918000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.649125ms)

** stderr ** 
	error: context "no-preload-918000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-918000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (29.665292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-918000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (30.353667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
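
All eight wanted images are reported missing because image list ran against a stopped host, not because the cache is empty; the FirstStart stderr shows every image tar being saved successfully. To inspect the listing by hand, a sketch assuming jq is installed and that each entry in minikube's JSON output carries a repoTags array:

    # Re-run the command from the test and flatten the tags it reports
    out/minikube-darwin-arm64 -p no-preload-918000 image list --format=json |
        jq -r '.[].repoTags[]' | sort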

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-918000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-918000 --alsologtostderr -v=1: exit status 83 (43.090875ms)

-- stdout --
	* The control-plane node no-preload-918000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-918000"

-- /stdout --
** stderr ** 
	I0920 10:58:13.320056   10501 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:58:13.320221   10501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:13.320224   10501 out.go:358] Setting ErrFile to fd 2...
	I0920 10:58:13.320227   10501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:13.320388   10501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:58:13.320620   10501 out.go:352] Setting JSON to false
	I0920 10:58:13.320633   10501 mustload.go:65] Loading cluster: no-preload-918000
	I0920 10:58:13.320878   10501 config.go:182] Loaded profile config "no-preload-918000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:13.324736   10501 out.go:177] * The control-plane node no-preload-918000 host is not running: state=Stopped
	I0920 10:58:13.328789   10501 out.go:177]   To start a cluster, run: "minikube start -p no-preload-918000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-918000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (29.440208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (29.806417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
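
Note: pause exits with status 83 here because the control-plane host is Stopped, and the command's own output points at the fix. A minimal sketch that gates the pause on the host state reported by status (profile name and flags are the ones used in this run):

	# Sketch: pause only when the host is actually running.
	host=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p no-preload-918000)
	if [ "$host" = "Running" ]; then
		out/minikube-darwin-arm64 pause -p no-preload-918000
	else
		# The output above suggests starting the cluster first.
		out/minikube-darwin-arm64 start -p no-preload-918000
	fi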

TestStartStop/group/embed-certs/serial/FirstStart (10s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-391000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-391000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.929496542s)

-- stdout --
	* [embed-certs-391000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-391000" primary control-plane node in "embed-certs-391000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-391000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:58:13.641313   10518 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:58:13.641455   10518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:13.641458   10518 out.go:358] Setting ErrFile to fd 2...
	I0920 10:58:13.641461   10518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:13.641600   10518 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:58:13.642669   10518 out.go:352] Setting JSON to false
	I0920 10:58:13.658772   10518 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7064,"bootTime":1726848029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:58:13.658833   10518 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:58:13.663459   10518 out.go:177] * [embed-certs-391000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:58:13.669324   10518 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:58:13.669403   10518 notify.go:220] Checking for updates...
	I0920 10:58:13.676403   10518 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:58:13.679351   10518 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:58:13.682367   10518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:58:13.685399   10518 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:58:13.688382   10518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:58:13.691753   10518 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:13.691810   10518 config.go:182] Loaded profile config "stopped-upgrade-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0920 10:58:13.691855   10518 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:58:13.696378   10518 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:58:13.703304   10518 start.go:297] selected driver: qemu2
	I0920 10:58:13.703310   10518 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:58:13.703315   10518 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:58:13.705460   10518 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:58:13.708343   10518 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:58:13.709586   10518 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:58:13.709603   10518 cni.go:84] Creating CNI manager for ""
	I0920 10:58:13.709621   10518 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:58:13.709633   10518 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:58:13.709656   10518 start.go:340] cluster config:
	{Name:embed-certs-391000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:58:13.713002   10518 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:13.720324   10518 out.go:177] * Starting "embed-certs-391000" primary control-plane node in "embed-certs-391000" cluster
	I0920 10:58:13.724332   10518 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:58:13.724347   10518 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:58:13.724364   10518 cache.go:56] Caching tarball of preloaded images
	I0920 10:58:13.724427   10518 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:58:13.724432   10518 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:58:13.724521   10518 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/embed-certs-391000/config.json ...
	I0920 10:58:13.724541   10518 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/embed-certs-391000/config.json: {Name:mk19f9f6df32eb2adb8b04776c9ae38d843b58a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:58:13.724830   10518 start.go:360] acquireMachinesLock for embed-certs-391000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:13.724860   10518 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "embed-certs-391000"
	I0920 10:58:13.724874   10518 start.go:93] Provisioning new machine with config: &{Name:embed-certs-391000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:58:13.724896   10518 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:58:13.733257   10518 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:58:13.748426   10518 start.go:159] libmachine.API.Create for "embed-certs-391000" (driver="qemu2")
	I0920 10:58:13.748448   10518 client.go:168] LocalClient.Create starting
	I0920 10:58:13.748516   10518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:58:13.748545   10518 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:13.748554   10518 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:13.748590   10518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:58:13.748613   10518 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:13.748621   10518 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:13.748965   10518 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:58:13.913406   10518 main.go:141] libmachine: Creating SSH key...
	I0920 10:58:14.104004   10518 main.go:141] libmachine: Creating Disk image...
	I0920 10:58:14.104012   10518 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:58:14.104236   10518 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2
	I0920 10:58:14.114017   10518 main.go:141] libmachine: STDOUT: 
	I0920 10:58:14.114038   10518 main.go:141] libmachine: STDERR: 
	I0920 10:58:14.114099   10518 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2 +20000M
	I0920 10:58:14.122266   10518 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:58:14.122281   10518 main.go:141] libmachine: STDERR: 
	I0920 10:58:14.122307   10518 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2
	I0920 10:58:14.122313   10518 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:58:14.122325   10518 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:14.122356   10518 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:8a:8f:99:b1:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2
	I0920 10:58:14.124098   10518 main.go:141] libmachine: STDOUT: 
	I0920 10:58:14.124114   10518 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:14.124131   10518 client.go:171] duration metric: took 375.681542ms to LocalClient.Create
	I0920 10:58:16.126320   10518 start.go:128] duration metric: took 2.401414625s to createHost
	I0920 10:58:16.126397   10518 start.go:83] releasing machines lock for "embed-certs-391000", held for 2.401541625s
	W0920 10:58:16.126513   10518 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:16.143415   10518 out.go:177] * Deleting "embed-certs-391000" in qemu2 ...
	W0920 10:58:16.169050   10518 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:16.169072   10518 start.go:729] Will try again in 5 seconds ...
	I0920 10:58:21.171235   10518 start.go:360] acquireMachinesLock for embed-certs-391000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:21.171880   10518 start.go:364] duration metric: took 489.459µs to acquireMachinesLock for "embed-certs-391000"
	I0920 10:58:21.172056   10518 start.go:93] Provisioning new machine with config: &{Name:embed-certs-391000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:58:21.172356   10518 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:58:21.189942   10518 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:58:21.241831   10518 start.go:159] libmachine.API.Create for "embed-certs-391000" (driver="qemu2")
	I0920 10:58:21.241899   10518 client.go:168] LocalClient.Create starting
	I0920 10:58:21.242020   10518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:58:21.242081   10518 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:21.242098   10518 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:21.242158   10518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:58:21.242202   10518 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:21.242216   10518 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:21.242724   10518 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:58:21.415884   10518 main.go:141] libmachine: Creating SSH key...
	I0920 10:58:21.461432   10518 main.go:141] libmachine: Creating Disk image...
	I0920 10:58:21.461438   10518 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:58:21.461648   10518 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2
	I0920 10:58:21.470725   10518 main.go:141] libmachine: STDOUT: 
	I0920 10:58:21.470743   10518 main.go:141] libmachine: STDERR: 
	I0920 10:58:21.470805   10518 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2 +20000M
	I0920 10:58:21.478597   10518 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:58:21.478639   10518 main.go:141] libmachine: STDERR: 
	I0920 10:58:21.478656   10518 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2
	I0920 10:58:21.478662   10518 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:58:21.478671   10518 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:21.478699   10518 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:33:fb:7b:c8:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2
	I0920 10:58:21.480296   10518 main.go:141] libmachine: STDOUT: 
	I0920 10:58:21.480309   10518 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:21.480333   10518 client.go:171] duration metric: took 238.431084ms to LocalClient.Create
	I0920 10:58:23.482537   10518 start.go:128] duration metric: took 2.310157583s to createHost
	I0920 10:58:23.482617   10518 start.go:83] releasing machines lock for "embed-certs-391000", held for 2.310689834s
	W0920 10:58:23.483035   10518 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-391000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-391000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:23.504796   10518 out.go:201] 
	W0920 10:58:23.508865   10518 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:58:23.508893   10518 out.go:270] * 
	* 
	W0920 10:58:23.511491   10518 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:58:23.527742   10518 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-391000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (64.240791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.00s)
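
Note: both creation attempts die at the same step: QEMU is launched through socket_vmnet_client, and 'Failed to connect to "/var/run/socket_vmnet": Connection refused' means no socket_vmnet daemon was listening on the host. A host-side sketch of the check and restart, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver docs (paths and service management may differ on other setups):

	# Sketch: confirm the socket exists, then (re)start the daemon.
	ls -l /var/run/socket_vmnet || echo "socket_vmnet daemon is not running" >&2
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet

Once the daemon is reachable again, the qemu-system-aarch64 invocation shown in the log should get a usable fd from socket_vmnet_client instead of a refused connection.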

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-959000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-959000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.888693625s)

-- stdout --
	* [default-k8s-diff-port-959000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-959000" primary control-plane node in "default-k8s-diff-port-959000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-959000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:58:15.158824   10538 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:58:15.158945   10538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:15.158948   10538 out.go:358] Setting ErrFile to fd 2...
	I0920 10:58:15.158953   10538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:15.159088   10538 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:58:15.160145   10538 out.go:352] Setting JSON to false
	I0920 10:58:15.176008   10538 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7066,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:58:15.176072   10538 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:58:15.181165   10538 out.go:177] * [default-k8s-diff-port-959000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:58:15.188065   10538 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:58:15.188132   10538 notify.go:220] Checking for updates...
	I0920 10:58:15.193976   10538 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:58:15.197063   10538 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:58:15.200137   10538 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:58:15.202966   10538 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:58:15.206065   10538 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:58:15.209411   10538 config.go:182] Loaded profile config "embed-certs-391000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:15.209472   10538 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:15.209526   10538 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:58:15.213022   10538 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:58:15.220046   10538 start.go:297] selected driver: qemu2
	I0920 10:58:15.220052   10538 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:58:15.220057   10538 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:58:15.222236   10538 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:58:15.223589   10538 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:58:15.226126   10538 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:58:15.226151   10538 cni.go:84] Creating CNI manager for ""
	I0920 10:58:15.226177   10538 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:58:15.226182   10538 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:58:15.226208   10538 start.go:340] cluster config:
	{Name:default-k8s-diff-port-959000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:58:15.229685   10538 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:15.236964   10538 out.go:177] * Starting "default-k8s-diff-port-959000" primary control-plane node in "default-k8s-diff-port-959000" cluster
	I0920 10:58:15.241017   10538 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:58:15.241034   10538 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:58:15.241045   10538 cache.go:56] Caching tarball of preloaded images
	I0920 10:58:15.241114   10538 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:58:15.241120   10538 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:58:15.241188   10538 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/default-k8s-diff-port-959000/config.json ...
	I0920 10:58:15.241199   10538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/default-k8s-diff-port-959000/config.json: {Name:mk5050fba8758ebfa0febafac587e2581e50ccbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:58:15.241409   10538 start.go:360] acquireMachinesLock for default-k8s-diff-port-959000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:16.126593   10538 start.go:364] duration metric: took 885.163458ms to acquireMachinesLock for "default-k8s-diff-port-959000"
	I0920 10:58:16.126724   10538 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:58:16.126895   10538 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:58:16.136403   10538 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:58:16.185141   10538 start.go:159] libmachine.API.Create for "default-k8s-diff-port-959000" (driver="qemu2")
	I0920 10:58:16.185201   10538 client.go:168] LocalClient.Create starting
	I0920 10:58:16.185346   10538 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:58:16.185408   10538 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:16.185424   10538 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:16.185503   10538 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:58:16.185554   10538 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:16.185567   10538 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:16.186201   10538 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:58:16.360856   10538 main.go:141] libmachine: Creating SSH key...
	I0920 10:58:16.473203   10538 main.go:141] libmachine: Creating Disk image...
	I0920 10:58:16.473210   10538 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:58:16.473414   10538 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2
	I0920 10:58:16.482515   10538 main.go:141] libmachine: STDOUT: 
	I0920 10:58:16.482532   10538 main.go:141] libmachine: STDERR: 
	I0920 10:58:16.482586   10538 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2 +20000M
	I0920 10:58:16.490366   10538 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:58:16.490381   10538 main.go:141] libmachine: STDERR: 
	I0920 10:58:16.490399   10538 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2
	I0920 10:58:16.490404   10538 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:58:16.490418   10538 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:16.490443   10538 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:ce:f1:0e:fe:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2
	I0920 10:58:16.492066   10538 main.go:141] libmachine: STDOUT: 
	I0920 10:58:16.492079   10538 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:16.492098   10538 client.go:171] duration metric: took 306.891708ms to LocalClient.Create
	I0920 10:58:18.494311   10538 start.go:128] duration metric: took 2.367399958s to createHost
	I0920 10:58:18.494368   10538 start.go:83] releasing machines lock for "default-k8s-diff-port-959000", held for 2.36775725s
	W0920 10:58:18.494418   10538 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:18.511623   10538 out.go:177] * Deleting "default-k8s-diff-port-959000" in qemu2 ...
	W0920 10:58:18.555907   10538 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:18.555934   10538 start.go:729] Will try again in 5 seconds ...
	I0920 10:58:23.556354   10538 start.go:360] acquireMachinesLock for default-k8s-diff-port-959000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:23.556753   10538 start.go:364] duration metric: took 286.542µs to acquireMachinesLock for "default-k8s-diff-port-959000"
	I0920 10:58:23.556820   10538 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:58:23.557009   10538 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:58:23.563262   10538 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:58:23.597574   10538 start.go:159] libmachine.API.Create for "default-k8s-diff-port-959000" (driver="qemu2")
	I0920 10:58:23.597620   10538 client.go:168] LocalClient.Create starting
	I0920 10:58:23.597725   10538 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:58:23.597791   10538 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:23.597808   10538 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:23.597861   10538 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:58:23.597904   10538 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:23.597917   10538 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:23.598421   10538 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:58:23.841244   10538 main.go:141] libmachine: Creating SSH key...
	I0920 10:58:23.961308   10538 main.go:141] libmachine: Creating Disk image...
	I0920 10:58:23.961316   10538 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:58:23.961505   10538 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2
	I0920 10:58:23.970724   10538 main.go:141] libmachine: STDOUT: 
	I0920 10:58:23.970747   10538 main.go:141] libmachine: STDERR: 
	I0920 10:58:23.970808   10538 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2 +20000M
	I0920 10:58:23.978591   10538 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:58:23.978608   10538 main.go:141] libmachine: STDERR: 
	I0920 10:58:23.978625   10538 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2
	I0920 10:58:23.978632   10538 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:58:23.978643   10538 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:23.978671   10538 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:bd:db:d8:4f:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2
	I0920 10:58:23.980282   10538 main.go:141] libmachine: STDOUT: 
	I0920 10:58:23.980295   10538 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:23.980306   10538 client.go:171] duration metric: took 382.682625ms to LocalClient.Create
	I0920 10:58:25.982203   10538 start.go:128] duration metric: took 2.425172208s to createHost
	I0920 10:58:25.982281   10538 start.go:83] releasing machines lock for "default-k8s-diff-port-959000", held for 2.425526041s
	W0920 10:58:25.982719   10538 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-959000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-959000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:25.991304   10538 out.go:201] 
	W0920 10:58:25.994468   10538 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:58:25.994509   10538 out.go:270] * 
	* 
	W0920 10:58:25.997267   10538 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:58:26.006428   10538 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-959000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000: exit status 7 (64.063541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-959000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.95s)
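
Every failure in this group shares the root cause visible in the stderr above: socket_vmnet_client cannot reach the control socket at /var/run/socket_vmnet (Connection refused), so no VM ever boots and every later step finds the profile Stopped. A minimal Go sketch, assuming the default SocketVMnetPath shown in the config dumps above, reproduces the same connectivity check outside the test suite:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path assumed from the SocketVMnetPath in the cluster configs above.
		const sock = "/var/run/socket_vmnet"

		// Dialing the unix socket performs roughly the same connect(2) that
		// socket_vmnet_client attempts before launching qemu-system-aarch64.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe also reports connection refused, the failures above are environmental (the socket_vmnet daemon on the CI host is not listening) rather than regressions in minikube itself.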

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-391000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-391000 create -f testdata/busybox.yaml: exit status 1 (32.370541ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-391000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-391000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (33.187459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (31.2685ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-391000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-391000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-391000 describe deploy/metrics-server -n kube-system: exit status 1 (29.460459ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-391000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-391000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (31.281084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-959000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-959000 create -f testdata/busybox.yaml: exit status 1 (30.158ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-959000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-959000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000: exit status 7 (29.520083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-959000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000: exit status 7 (28.874542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-959000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-959000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-959000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-959000 describe deploy/metrics-server -n kube-system: exit status 1 (26.799333ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-959000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-959000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000: exit status 7 (29.4685ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-959000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-391000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-391000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.187187292s)

                                                
                                                
-- stdout --
	* [embed-certs-391000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-391000" primary control-plane node in "embed-certs-391000" cluster
	* Restarting existing qemu2 VM for "embed-certs-391000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-391000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:58:26.737324   10609 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:58:26.737492   10609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:26.737495   10609 out.go:358] Setting ErrFile to fd 2...
	I0920 10:58:26.737498   10609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:26.737643   10609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:58:26.738551   10609 out.go:352] Setting JSON to false
	I0920 10:58:26.754476   10609 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7077,"bootTime":1726848029,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:58:26.754548   10609 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:58:26.758024   10609 out.go:177] * [embed-certs-391000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:58:26.765128   10609 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:58:26.765184   10609 notify.go:220] Checking for updates...
	I0920 10:58:26.773062   10609 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:58:26.775893   10609 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:58:26.779031   10609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:58:26.782107   10609 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:58:26.783613   10609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:58:26.787289   10609 config.go:182] Loaded profile config "embed-certs-391000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:26.787579   10609 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:58:26.791112   10609 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:58:26.796962   10609 start.go:297] selected driver: qemu2
	I0920 10:58:26.796968   10609 start.go:901] validating driver "qemu2" against &{Name:embed-certs-391000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:58:26.797023   10609 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:58:26.799396   10609 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:58:26.799428   10609 cni.go:84] Creating CNI manager for ""
	I0920 10:58:26.799448   10609 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:58:26.799482   10609 start.go:340] cluster config:
	{Name:embed-certs-391000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:58:26.802882   10609 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:26.810930   10609 out.go:177] * Starting "embed-certs-391000" primary control-plane node in "embed-certs-391000" cluster
	I0920 10:58:26.815043   10609 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:58:26.815057   10609 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:58:26.815061   10609 cache.go:56] Caching tarball of preloaded images
	I0920 10:58:26.815116   10609 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:58:26.815122   10609 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:58:26.815183   10609 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/embed-certs-391000/config.json ...
	I0920 10:58:26.815656   10609 start.go:360] acquireMachinesLock for embed-certs-391000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:26.815684   10609 start.go:364] duration metric: took 22.375µs to acquireMachinesLock for "embed-certs-391000"
	I0920 10:58:26.815694   10609 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:58:26.815698   10609 fix.go:54] fixHost starting: 
	I0920 10:58:26.815819   10609 fix.go:112] recreateIfNeeded on embed-certs-391000: state=Stopped err=<nil>
	W0920 10:58:26.815829   10609 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:58:26.819067   10609 out.go:177] * Restarting existing qemu2 VM for "embed-certs-391000" ...
	I0920 10:58:26.827082   10609 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:26.827118   10609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:33:fb:7b:c8:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2
	I0920 10:58:26.829040   10609 main.go:141] libmachine: STDOUT: 
	I0920 10:58:26.829059   10609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:26.829089   10609 fix.go:56] duration metric: took 13.388417ms for fixHost
	I0920 10:58:26.829094   10609 start.go:83] releasing machines lock for "embed-certs-391000", held for 13.405625ms
	W0920 10:58:26.829100   10609 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:58:26.829132   10609 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:26.829137   10609 start.go:729] Will try again in 5 seconds ...
	I0920 10:58:31.831256   10609 start.go:360] acquireMachinesLock for embed-certs-391000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:31.831679   10609 start.go:364] duration metric: took 337.125µs to acquireMachinesLock for "embed-certs-391000"
	I0920 10:58:31.831811   10609 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:58:31.831831   10609 fix.go:54] fixHost starting: 
	I0920 10:58:31.832573   10609 fix.go:112] recreateIfNeeded on embed-certs-391000: state=Stopped err=<nil>
	W0920 10:58:31.832605   10609 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:58:31.842209   10609 out.go:177] * Restarting existing qemu2 VM for "embed-certs-391000" ...
	I0920 10:58:31.846106   10609 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:31.846274   10609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:33:fb:7b:c8:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/embed-certs-391000/disk.qcow2
	I0920 10:58:31.855170   10609 main.go:141] libmachine: STDOUT: 
	I0920 10:58:31.855227   10609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:31.855315   10609 fix.go:56] duration metric: took 23.471542ms for fixHost
	I0920 10:58:31.855333   10609 start.go:83] releasing machines lock for "embed-certs-391000", held for 23.634792ms
	W0920 10:58:31.855496   10609 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-391000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-391000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:31.864048   10609 out.go:201] 
	W0920 10:58:31.868263   10609 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:58:31.868294   10609 out.go:270] * 
	* 
	W0920 10:58:31.870947   10609 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:58:31.882867   10609 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-391000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (68.198542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
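
The 5.26s wall time here is almost entirely minikube's built-in retry: the first StartHost attempt fails against the refused socket, the log notes "Will try again in 5 seconds ...", and the second attempt fails identically before the run exits with GUEST_PROVISION (exit status 80). The default-k8s-diff-port SecondStart below fails through the same path.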

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-959000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-959000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.194685625s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-959000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-959000" primary control-plane node in "default-k8s-diff-port-959000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-959000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-959000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:58:29.858339   10635 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:58:29.858455   10635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:29.858457   10635 out.go:358] Setting ErrFile to fd 2...
	I0920 10:58:29.858460   10635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:29.858590   10635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:58:29.859624   10635 out.go:352] Setting JSON to false
	I0920 10:58:29.875873   10635 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7080,"bootTime":1726848029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:58:29.875937   10635 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:58:29.881136   10635 out.go:177] * [default-k8s-diff-port-959000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:58:29.889062   10635 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:58:29.889124   10635 notify.go:220] Checking for updates...
	I0920 10:58:29.896038   10635 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:58:29.899019   10635 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:58:29.902122   10635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:58:29.905103   10635 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:58:29.908107   10635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:58:29.911424   10635 config.go:182] Loaded profile config "default-k8s-diff-port-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:29.911716   10635 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:58:29.915127   10635 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:58:29.922085   10635 start.go:297] selected driver: qemu2
	I0920 10:58:29.922093   10635 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:58:29.922166   10635 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:58:29.924568   10635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 10:58:29.924600   10635 cni.go:84] Creating CNI manager for ""
	I0920 10:58:29.924633   10635 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:58:29.924652   10635 start.go:340] cluster config:
	{Name:default-k8s-diff-port-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-959000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:58:29.928218   10635 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:29.936028   10635 out.go:177] * Starting "default-k8s-diff-port-959000" primary control-plane node in "default-k8s-diff-port-959000" cluster
	I0920 10:58:29.940141   10635 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:58:29.940157   10635 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:58:29.940166   10635 cache.go:56] Caching tarball of preloaded images
	I0920 10:58:29.940240   10635 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:58:29.940246   10635 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:58:29.940317   10635 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/default-k8s-diff-port-959000/config.json ...
	I0920 10:58:29.940827   10635 start.go:360] acquireMachinesLock for default-k8s-diff-port-959000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:29.940857   10635 start.go:364] duration metric: took 22.833µs to acquireMachinesLock for "default-k8s-diff-port-959000"
	I0920 10:58:29.940867   10635 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:58:29.940872   10635 fix.go:54] fixHost starting: 
	I0920 10:58:29.941005   10635 fix.go:112] recreateIfNeeded on default-k8s-diff-port-959000: state=Stopped err=<nil>
	W0920 10:58:29.941016   10635 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:58:29.945055   10635 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-959000" ...
	I0920 10:58:29.952972   10635 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:29.953003   10635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:bd:db:d8:4f:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2
	I0920 10:58:29.955123   10635 main.go:141] libmachine: STDOUT: 
	I0920 10:58:29.955140   10635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:29.955168   10635 fix.go:56] duration metric: took 14.296583ms for fixHost
	I0920 10:58:29.955173   10635 start.go:83] releasing machines lock for "default-k8s-diff-port-959000", held for 14.31225ms
	W0920 10:58:29.955179   10635 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:58:29.955214   10635 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:29.955218   10635 start.go:729] Will try again in 5 seconds ...
	I0920 10:58:34.957413   10635 start.go:360] acquireMachinesLock for default-k8s-diff-port-959000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:34.957865   10635 start.go:364] duration metric: took 342.459µs to acquireMachinesLock for "default-k8s-diff-port-959000"
	I0920 10:58:34.957982   10635 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:58:34.958004   10635 fix.go:54] fixHost starting: 
	I0920 10:58:34.958790   10635 fix.go:112] recreateIfNeeded on default-k8s-diff-port-959000: state=Stopped err=<nil>
	W0920 10:58:34.958815   10635 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:58:34.968637   10635 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-959000" ...
	I0920 10:58:34.981885   10635 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:34.982096   10635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:bd:db:d8:4f:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/default-k8s-diff-port-959000/disk.qcow2
	I0920 10:58:34.990448   10635 main.go:141] libmachine: STDOUT: 
	I0920 10:58:34.990521   10635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:34.990628   10635 fix.go:56] duration metric: took 32.621375ms for fixHost
	I0920 10:58:34.990655   10635 start.go:83] releasing machines lock for "default-k8s-diff-port-959000", held for 32.763875ms
	W0920 10:58:34.990843   10635 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-959000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-959000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:34.997725   10635 out.go:201] 
	W0920 10:58:35.000701   10635 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:58:35.000751   10635 out.go:270] * 
	* 
	W0920 10:58:35.003438   10635 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:58:35.012618   10635 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-959000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000: exit status 7 (68.623791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-959000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-391000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (32.198625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-391000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-391000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-391000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.844375ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-391000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-391000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (29.5115ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-391000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (29.708458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
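
The (-want +got) diff above is go-cmp style output: the entries prefixed with - are the images expected for v1.31.1, and the absence of any + entries means "image list" returned nothing at all. Since the VM never started, an empty image list is a downstream symptom of the same socket_vmnet failure, not a separate image regression.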

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-391000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-391000 --alsologtostderr -v=1: exit status 83 (40.757416ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-391000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-391000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 10:58:32.154314   10654 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:58:32.154482   10654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:32.154485   10654 out.go:358] Setting ErrFile to fd 2...
	I0920 10:58:32.154488   10654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:32.154626   10654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:58:32.154837   10654 out.go:352] Setting JSON to false
	I0920 10:58:32.154847   10654 mustload.go:65] Loading cluster: embed-certs-391000
	I0920 10:58:32.155069   10654 config.go:182] Loaded profile config "embed-certs-391000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:32.158553   10654 out.go:177] * The control-plane node embed-certs-391000 host is not running: state=Stopped
	I0920 10:58:32.162298   10654 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-391000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-391000 --alsologtostderr -v=1 failed: exit status 83
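
Exit status 83 here is not a pause failure as such; per the stderr above, minikube found the host in state=Stopped and declined to act. The pattern is consistent throughout this report: status exits 7 with "Stopped", then pause exits 83. A guard of roughly this shape would skip the no-op case (a sketch; the "Running" value is an assumption, since only "Stopped" appears in this log):

    host=$(out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000)
    [ "$host" = "Running" ] && out/minikube-darwin-arm64 pause -p embed-certs-391000 --alsologtostderr -v=1
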
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (29.440333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (29.660834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-331000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-331000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.830007167s)

-- stdout --
	* [newest-cni-331000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-331000" primary control-plane node in "newest-cni-331000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-331000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:58:32.471459   10671 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:58:32.471578   10671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:32.471581   10671 out.go:358] Setting ErrFile to fd 2...
	I0920 10:58:32.471584   10671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:32.471718   10671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:58:32.472840   10671 out.go:352] Setting JSON to false
	I0920 10:58:32.489091   10671 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7083,"bootTime":1726848029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:58:32.489169   10671 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:58:32.494495   10671 out.go:177] * [newest-cni-331000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:58:32.501385   10671 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:58:32.501433   10671 notify.go:220] Checking for updates...
	I0920 10:58:32.507313   10671 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:58:32.510423   10671 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:58:32.521103   10671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:58:32.524382   10671 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:58:32.527437   10671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:58:32.530665   10671 config.go:182] Loaded profile config "default-k8s-diff-port-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:32.530735   10671 config.go:182] Loaded profile config "multinode-101000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:32.530797   10671 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:58:32.534317   10671 out.go:177] * Using the qemu2 driver based on user configuration
	I0920 10:58:32.541434   10671 start.go:297] selected driver: qemu2
	I0920 10:58:32.541440   10671 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:58:32.541447   10671 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:58:32.543884   10671 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0920 10:58:32.543933   10671 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0920 10:58:32.552315   10671 out.go:177] * Automatically selected the socket_vmnet network
	I0920 10:58:32.555479   10671 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 10:58:32.555507   10671 cni.go:84] Creating CNI manager for ""
	I0920 10:58:32.555535   10671 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:58:32.555541   10671 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:58:32.555577   10671 start.go:340] cluster config:
	{Name:newest-cni-331000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:58:32.559811   10671 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:32.568369   10671 out.go:177] * Starting "newest-cni-331000" primary control-plane node in "newest-cni-331000" cluster
	I0920 10:58:32.572228   10671 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:58:32.572247   10671 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:58:32.572254   10671 cache.go:56] Caching tarball of preloaded images
	I0920 10:58:32.572324   10671 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:58:32.572332   10671 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:58:32.572399   10671 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/newest-cni-331000/config.json ...
	I0920 10:58:32.572416   10671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/newest-cni-331000/config.json: {Name:mk042a5492b18150ae51d683beff62cc5cbd553d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:58:32.572648   10671 start.go:360] acquireMachinesLock for newest-cni-331000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:32.572684   10671 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "newest-cni-331000"
	I0920 10:58:32.572698   10671 start.go:93] Provisioning new machine with config: &{Name:newest-cni-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:58:32.572735   10671 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:58:32.577424   10671 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:58:32.595989   10671 start.go:159] libmachine.API.Create for "newest-cni-331000" (driver="qemu2")
	I0920 10:58:32.596024   10671 client.go:168] LocalClient.Create starting
	I0920 10:58:32.596101   10671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:58:32.596133   10671 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:32.596146   10671 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:32.596182   10671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:58:32.596206   10671 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:32.596214   10671 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:32.596575   10671 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:58:32.761707   10671 main.go:141] libmachine: Creating SSH key...
	I0920 10:58:32.823765   10671 main.go:141] libmachine: Creating Disk image...
	I0920 10:58:32.823771   10671 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:58:32.823963   10671 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2
	I0920 10:58:32.833186   10671 main.go:141] libmachine: STDOUT: 
	I0920 10:58:32.833203   10671 main.go:141] libmachine: STDERR: 
	I0920 10:58:32.833268   10671 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2 +20000M
	I0920 10:58:32.841006   10671 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:58:32.841021   10671 main.go:141] libmachine: STDERR: 
	I0920 10:58:32.841038   10671 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2
	I0920 10:58:32.841043   10671 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:58:32.841056   10671 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:32.841090   10671 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:0b:33:72:0c:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2
	I0920 10:58:32.842717   10671 main.go:141] libmachine: STDOUT: 
	I0920 10:58:32.842731   10671 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:32.842748   10671 client.go:171] duration metric: took 246.718458ms to LocalClient.Create
	I0920 10:58:34.844914   10671 start.go:128] duration metric: took 2.272171042s to createHost
	I0920 10:58:34.844969   10671 start.go:83] releasing machines lock for "newest-cni-331000", held for 2.272289834s
	W0920 10:58:34.845037   10671 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:34.860040   10671 out.go:177] * Deleting "newest-cni-331000" in qemu2 ...
	W0920 10:58:34.898200   10671 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:34.898222   10671 start.go:729] Will try again in 5 seconds ...
	I0920 10:58:39.899532   10671 start.go:360] acquireMachinesLock for newest-cni-331000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:39.900154   10671 start.go:364] duration metric: took 490.709µs to acquireMachinesLock for "newest-cni-331000"
	I0920 10:58:39.900309   10671 start.go:93] Provisioning new machine with config: &{Name:newest-cni-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 10:58:39.900569   10671 start.go:125] createHost starting for "" (driver="qemu2")
	I0920 10:58:39.906295   10671 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 10:58:39.954948   10671 start.go:159] libmachine.API.Create for "newest-cni-331000" (driver="qemu2")
	I0920 10:58:39.954999   10671 client.go:168] LocalClient.Create starting
	I0920 10:58:39.955122   10671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/ca.pem
	I0920 10:58:39.955186   10671 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:39.955204   10671 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:39.955268   10671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19679-6783/.minikube/certs/cert.pem
	I0920 10:58:39.955312   10671 main.go:141] libmachine: Decoding PEM data...
	I0920 10:58:39.955326   10671 main.go:141] libmachine: Parsing certificate...
	I0920 10:58:39.955912   10671 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0920 10:58:40.131164   10671 main.go:141] libmachine: Creating SSH key...
	I0920 10:58:40.205362   10671 main.go:141] libmachine: Creating Disk image...
	I0920 10:58:40.205367   10671 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0920 10:58:40.205565   10671 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2.raw /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2
	I0920 10:58:40.214733   10671 main.go:141] libmachine: STDOUT: 
	I0920 10:58:40.214753   10671 main.go:141] libmachine: STDERR: 
	I0920 10:58:40.214816   10671 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2 +20000M
	I0920 10:58:40.222622   10671 main.go:141] libmachine: STDOUT: Image resized.
	
	I0920 10:58:40.222638   10671 main.go:141] libmachine: STDERR: 
	I0920 10:58:40.222648   10671 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2
	I0920 10:58:40.222652   10671 main.go:141] libmachine: Starting QEMU VM...
	I0920 10:58:40.222668   10671 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:40.222694   10671 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:5f:c7:69:49:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2
	I0920 10:58:40.224306   10671 main.go:141] libmachine: STDOUT: 
	I0920 10:58:40.224321   10671 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:40.224335   10671 client.go:171] duration metric: took 269.332791ms to LocalClient.Create
	I0920 10:58:42.226506   10671 start.go:128] duration metric: took 2.325905625s to createHost
	I0920 10:58:42.226556   10671 start.go:83] releasing machines lock for "newest-cni-331000", held for 2.3263835s
	W0920 10:58:42.226851   10671 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-331000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-331000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:42.241616   10671 out.go:201] 
	W0920 10:58:42.244658   10671 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:58:42.244681   10671 out.go:270] * 
	* 
	W0920 10:58:42.247144   10671 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:58:42.262498   10671 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-331000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
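
As in the other groups in this report, both VM creation attempts die before boot: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so QEMU never launches and minikube gives up after one delete-and-retry cycle. That points at the host's socket_vmnet daemon rather than at the test or the binary under test. A pre-flight check along these lines would separate host-setup failures from real regressions (socket path taken verbatim from the log; the brew service name assumes a Homebrew-managed socket_vmnet):

    [ -S /var/run/socket_vmnet ] || echo "socket_vmnet daemon is not listening"
    sudo brew services restart socket_vmnet    # hypothetical recovery step for Homebrew setups
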
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-331000 -n newest-cni-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-331000 -n newest-cni-331000: exit status 7 (70.187584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-331000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.90s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-959000" does not exist
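
The missing context is a downstream symptom: the cluster was never provisioned, so minikube never wrote a kubeconfig entry for this profile. Plain kubectl can confirm that independently (profile name taken from the log):

    kubectl config get-contexts -o name | grep -x default-k8s-diff-port-959000 || echo "no kubeconfig context for this profile"
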
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000: exit status 7 (32.458917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-959000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-959000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-959000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-959000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.857958ms)

** stderr ** 
	error: context "default-k8s-diff-port-959000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-959000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
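
The expected image is registry.k8s.io/echoserver:1.4 because these StartStop tests start the cluster with overridden dashboard images; the newest-cni profile config later in this report shows CustomAddonImages mapping MetricsScraper to that tag. On a live cluster the override could be read back directly (standard kubectl; the jsonpath assumes a single container in the deployment):

    kubectl --context default-k8s-diff-port-959000 -n kubernetes-dashboard get deploy/dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'
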
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000: exit status 7 (29.513042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-959000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-959000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000: exit status 7 (29.463ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-959000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-959000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-959000 --alsologtostderr -v=1: exit status 83 (40.864334ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-959000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-959000"

-- /stdout --
** stderr ** 
	I0920 10:58:35.281227   10693 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:58:35.281407   10693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:35.281411   10693 out.go:358] Setting ErrFile to fd 2...
	I0920 10:58:35.281413   10693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:35.281565   10693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:58:35.281781   10693 out.go:352] Setting JSON to false
	I0920 10:58:35.281793   10693 mustload.go:65] Loading cluster: default-k8s-diff-port-959000
	I0920 10:58:35.282030   10693 config.go:182] Loaded profile config "default-k8s-diff-port-959000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:35.286704   10693 out.go:177] * The control-plane node default-k8s-diff-port-959000 host is not running: state=Stopped
	I0920 10:58:35.289645   10693 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-959000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-959000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000: exit status 7 (29.024ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-959000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000: exit status 7 (29.403458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-959000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-331000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-331000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.185429042s)

-- stdout --
	* [newest-cni-331000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-331000" primary control-plane node in "newest-cni-331000" cluster
	* Restarting existing qemu2 VM for "newest-cni-331000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-331000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0920 10:58:44.424432   10734 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:58:44.424583   10734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:44.424587   10734 out.go:358] Setting ErrFile to fd 2...
	I0920 10:58:44.424589   10734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:44.424719   10734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:58:44.425743   10734 out.go:352] Setting JSON to false
	I0920 10:58:44.442345   10734 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7095,"bootTime":1726848029,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:58:44.442437   10734 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:58:44.447618   10734 out.go:177] * [newest-cni-331000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:58:44.454824   10734 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:58:44.454873   10734 notify.go:220] Checking for updates...
	I0920 10:58:44.461701   10734 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:58:44.464748   10734 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:58:44.467672   10734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:58:44.470755   10734 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:58:44.473799   10734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:58:44.475621   10734 config.go:182] Loaded profile config "newest-cni-331000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:44.475876   10734 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:58:44.479762   10734 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:58:44.486625   10734 start.go:297] selected driver: qemu2
	I0920 10:58:44.486632   10734 start.go:901] validating driver "qemu2" against &{Name:newest-cni-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:58:44.486707   10734 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:58:44.489076   10734 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 10:58:44.489102   10734 cni.go:84] Creating CNI manager for ""
	I0920 10:58:44.489123   10734 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:58:44.489148   10734 start.go:340] cluster config:
	{Name:newest-cni-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:58:44.492627   10734 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:58:44.500781   10734 out.go:177] * Starting "newest-cni-331000" primary control-plane node in "newest-cni-331000" cluster
	I0920 10:58:44.504753   10734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:58:44.504772   10734 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:58:44.504780   10734 cache.go:56] Caching tarball of preloaded images
	I0920 10:58:44.504845   10734 preload.go:172] Found /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 10:58:44.504851   10734 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:58:44.504917   10734 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/newest-cni-331000/config.json ...
	I0920 10:58:44.505285   10734 start.go:360] acquireMachinesLock for newest-cni-331000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:44.505313   10734 start.go:364] duration metric: took 22.166µs to acquireMachinesLock for "newest-cni-331000"
	I0920 10:58:44.505322   10734 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:58:44.505327   10734 fix.go:54] fixHost starting: 
	I0920 10:58:44.505452   10734 fix.go:112] recreateIfNeeded on newest-cni-331000: state=Stopped err=<nil>
	W0920 10:58:44.505461   10734 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:58:44.509618   10734 out.go:177] * Restarting existing qemu2 VM for "newest-cni-331000" ...
	I0920 10:58:44.516727   10734 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:44.516785   10734 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:5f:c7:69:49:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2
	I0920 10:58:44.518929   10734 main.go:141] libmachine: STDOUT: 
	I0920 10:58:44.518944   10734 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:44.518975   10734 fix.go:56] duration metric: took 13.646041ms for fixHost
	I0920 10:58:44.518979   10734 start.go:83] releasing machines lock for "newest-cni-331000", held for 13.66225ms
	W0920 10:58:44.518985   10734 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:58:44.519010   10734 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:44.519014   10734 start.go:729] Will try again in 5 seconds ...
	I0920 10:58:49.521142   10734 start.go:360] acquireMachinesLock for newest-cni-331000: {Name:mk694c915d2a6ee9cc7189b1812414f51e1925d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 10:58:49.521607   10734 start.go:364] duration metric: took 364.541µs to acquireMachinesLock for "newest-cni-331000"
	I0920 10:58:49.521746   10734 start.go:96] Skipping create...Using existing machine configuration
	I0920 10:58:49.521765   10734 fix.go:54] fixHost starting: 
	I0920 10:58:49.522456   10734 fix.go:112] recreateIfNeeded on newest-cni-331000: state=Stopped err=<nil>
	W0920 10:58:49.522482   10734 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 10:58:49.531822   10734 out.go:177] * Restarting existing qemu2 VM for "newest-cni-331000" ...
	I0920 10:58:49.535857   10734 qemu.go:418] Using hvf for hardware acceleration
	I0920 10:58:49.536070   10734 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:5f:c7:69:49:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19679-6783/.minikube/machines/newest-cni-331000/disk.qcow2
	I0920 10:58:49.545823   10734 main.go:141] libmachine: STDOUT: 
	I0920 10:58:49.545891   10734 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0920 10:58:49.546018   10734 fix.go:56] duration metric: took 24.249625ms for fixHost
	I0920 10:58:49.546041   10734 start.go:83] releasing machines lock for "newest-cni-331000", held for 24.401ms
	W0920 10:58:49.546244   10734 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-331000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-331000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0920 10:58:49.553756   10734 out.go:201] 
	W0920 10:58:49.557728   10734 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0920 10:58:49.557753   10734 out.go:270] * 
	* 
	W0920 10:58:49.560830   10734 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:58:49.568821   10734 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-331000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-331000 -n newest-cni-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-331000 -n newest-cni-331000: exit status 7 (69.39625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-331000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
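Note: every qemu2 start failure in this run shows the same root cause in the stderr above: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the daemon that hands QEMU its vmnet file descriptor (the fd=3 in the -netdev socket argument) is not running on the agent. A minimal triage from a shell on the agent might look like the following sketch; the launchd service label is an assumption based on a Homebrew install, since only the client path /opt/socket_vmnet/bin/socket_vmnet_client appears in the log:

$ ls -l /var/run/socket_vmnet                  # listening socket should exist while the daemon is up
$ pgrep -fl socket_vmnet                       # is any socket_vmnet process running?
$ sudo launchctl list | grep socket_vmnet      # assumed service label; only if installed via launchd/Homebrew services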

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-331000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-331000 -n newest-cni-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-331000 -n newest-cni-331000: exit status 7 (30.930667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-331000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
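Note: the (-want +got) diff above follows the go-cmp convention: lines prefixed with "-" are entries in the expected image list that are absent from the actual one, and the "got" side is empty here because the VM never started, so "image list" returned nothing. A minimal sketch of how such a diff is produced, assuming go-cmp as the diff library (illustrative names, not the actual test code):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{"registry.k8s.io/pause:3.10"} // expected image for the Kubernetes version
	got := []string{}                              // stopped VM: `image list` returns nothing
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
	}
}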

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-331000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-331000 --alsologtostderr -v=1: exit status 83 (41.373917ms)

-- stdout --
	* The control-plane node newest-cni-331000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-331000"

-- /stdout --
** stderr ** 
	I0920 10:58:49.757753   10748 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:58:49.757914   10748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:49.757917   10748 out.go:358] Setting ErrFile to fd 2...
	I0920 10:58:49.757920   10748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:58:49.758048   10748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:58:49.758262   10748 out.go:352] Setting JSON to false
	I0920 10:58:49.758270   10748 mustload.go:65] Loading cluster: newest-cni-331000
	I0920 10:58:49.758491   10748 config.go:182] Loaded profile config "newest-cni-331000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:58:49.761496   10748 out.go:177] * The control-plane node newest-cni-331000 host is not running: state=Stopped
	I0920 10:58:49.765464   10748 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-331000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-331000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-331000 -n newest-cni-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-331000 -n newest-cni-331000: exit status 7 (30.831ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-331000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-331000 -n newest-cni-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-331000 -n newest-cni-331000: exit status 7 (30.736667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-331000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (79/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.11
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 6.56
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.8
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 9.02
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.73
55 TestFunctional/serial/CacheCmd/cache/add_local 1.69
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.27
72 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/AddonsCmd 0.09
93 TestFunctional/parallel/License 0.25
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.9
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
126 TestFunctional/parallel/ProfileCmd/profile_list 0.08
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.87
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.2
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 1.04
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.42
258 TestNoKubernetes/serial/Stop 2.08
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
275 TestStartStop/group/old-k8s-version/serial/Stop 2
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
286 TestStartStop/group/no-preload/serial/Stop 3.62
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.1
299 TestStartStop/group/embed-certs/serial/Stop 2.75
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.42
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 1.86
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 10:32:17.581822    7279 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 10:32:17.582181    7279 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-134000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-134000: exit status 85 (105.013916ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-134000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |          |
	|         | -p download-only-134000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 10:32:04
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 10:32:04.183543    7280 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:32:04.183709    7280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:04.183712    7280 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:04.183715    7280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:04.183863    7280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	W0920 10:32:04.183953    7280 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19679-6783/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19679-6783/.minikube/config/config.json: no such file or directory
	I0920 10:32:04.185170    7280 out.go:352] Setting JSON to true
	I0920 10:32:04.203281    7280 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5495,"bootTime":1726848029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:32:04.203360    7280 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:32:04.207216    7280 out.go:97] [download-only-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:32:04.207353    7280 notify.go:220] Checking for updates...
	W0920 10:32:04.207407    7280 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 10:32:04.212223    7280 out.go:169] MINIKUBE_LOCATION=19679
	I0920 10:32:04.215662    7280 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:32:04.223274    7280 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:32:04.226226    7280 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:32:04.229175    7280 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	W0920 10:32:04.235225    7280 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 10:32:04.235439    7280 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:32:04.236968    7280 out.go:97] Using the qemu2 driver based on user configuration
	I0920 10:32:04.236985    7280 start.go:297] selected driver: qemu2
	I0920 10:32:04.236989    7280 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:32:04.237055    7280 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:32:04.240222    7280 out.go:169] Automatically selected the socket_vmnet network
	I0920 10:32:04.247748    7280 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0920 10:32:04.247852    7280 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:32:04.247910    7280 cni.go:84] Creating CNI manager for ""
	I0920 10:32:04.247952    7280 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 10:32:04.248026    7280 start.go:340] cluster config:
	{Name:download-only-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:32:04.251735    7280 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:32:04.255167    7280 out.go:97] Downloading VM boot image ...
	I0920 10:32:04.255185    7280 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso
	I0920 10:32:10.216843    7280 out.go:97] Starting "download-only-134000" primary control-plane node in "download-only-134000" cluster
	I0920 10:32:10.216868    7280 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:32:10.284434    7280 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:32:10.284444    7280 cache.go:56] Caching tarball of preloaded images
	I0920 10:32:10.285296    7280 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:32:10.289584    7280 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 10:32:10.289593    7280 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:32:10.380185    7280 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 10:32:16.243773    7280 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:32:16.243936    7280 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:32:16.939270    7280 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 10:32:16.939489    7280 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/download-only-134000/config.json ...
	I0920 10:32:16.939507    7280 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/download-only-134000/config.json: {Name:mk71334cad23d68a51beaafabf79bfa6a982dcb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:32:16.939744    7280 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 10:32:16.939934    7280 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0920 10:32:17.531795    7280 out.go:193] 
	W0920 10:32:17.535832    7280 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19679-6783/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10940d6c0 0x10940d6c0 0x10940d6c0 0x10940d6c0 0x10940d6c0 0x10940d6c0 0x10940d6c0] Decompressors:map[bz2:0x14000592880 gz:0x14000592888 tar:0x140005927f0 tar.bz2:0x14000592820 tar.gz:0x14000592850 tar.xz:0x14000592860 tar.zst:0x14000592870 tbz2:0x14000592820 tgz:0x14000592850 txz:0x14000592860 tzst:0x14000592870 xz:0x14000592890 zip:0x140005928a0 zst:0x14000592898] Getters:map[file:0x14001b54570 http:0x1400017b130 https:0x1400017b5e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0920 10:32:17.535860    7280 out_reason.go:110] 
	W0920 10:32:17.544737    7280 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 10:32:17.547692    7280 out.go:193] 
	
	
	* The control-plane node download-only-134000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-134000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.11s)
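Note: the out_reason.go entry above is the reason kubectl could not be cached for v1.20.0: the checksum file for the darwin/arm64 kubectl binary returns HTTP 404, consistent with no Apple-silicon kubectl having been published for that release. Availability is easy to confirm by hand with a HEAD request:

$ curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1    # 404 expected, per the log above
$ curl -sI https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256 | head -n 1    # the v1.31.1 kubectl download succeeds later in this run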

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-134000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (6.56s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-709000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-709000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (6.56475675s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.56s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 10:32:24.512681    7279 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 10:32:24.512735    7279 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)
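Note: the preload logs above show the caching scheme: preload.go first looks for a local tarball, otherwise downloads the per-version lz4 archive and verifies it against the md5 carried in the URL's checksum query parameter. The same verification can be repeated by hand on the agent with macOS's md5 tool, using the path and checksum taken from the log:

$ md5 -q /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
402f69b5e09ccb1e1dbe401b4cdd104d    # must match the checksum=md5:... parameter in the download URL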

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-709000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-709000: exit status 85 (80.418375ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-134000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | -p download-only-134000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	| delete  | -p download-only-134000        | download-only-134000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT | 20 Sep 24 10:32 PDT |
	| start   | -o=json --download-only        | download-only-709000 | jenkins | v1.34.0 | 20 Sep 24 10:32 PDT |                     |
	|         | -p download-only-709000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 10:32:17
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 10:32:17.975846    7305 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:32:17.975981    7305 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:17.975984    7305 out.go:358] Setting ErrFile to fd 2...
	I0920 10:32:17.975987    7305 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:32:17.976148    7305 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:32:17.977220    7305 out.go:352] Setting JSON to true
	I0920 10:32:17.993262    7305 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5508,"bootTime":1726848029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:32:17.993331    7305 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:32:17.997915    7305 out.go:97] [download-only-709000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:32:17.998001    7305 notify.go:220] Checking for updates...
	I0920 10:32:18.001775    7305 out.go:169] MINIKUBE_LOCATION=19679
	I0920 10:32:18.004964    7305 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:32:18.009698    7305 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:32:18.012837    7305 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:32:18.015882    7305 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	W0920 10:32:18.021810    7305 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 10:32:18.022010    7305 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:32:18.024820    7305 out.go:97] Using the qemu2 driver based on user configuration
	I0920 10:32:18.024831    7305 start.go:297] selected driver: qemu2
	I0920 10:32:18.024834    7305 start.go:901] validating driver "qemu2" against <nil>
	I0920 10:32:18.024895    7305 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 10:32:18.027825    7305 out.go:169] Automatically selected the socket_vmnet network
	I0920 10:32:18.033044    7305 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0920 10:32:18.033128    7305 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 10:32:18.033149    7305 cni.go:84] Creating CNI manager for ""
	I0920 10:32:18.033173    7305 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 10:32:18.033179    7305 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 10:32:18.033218    7305 start.go:340] cluster config:
	{Name:download-only-709000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-709000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:32:18.037125    7305 iso.go:125] acquiring lock: {Name:mk023dc7780e3bd1da8a266175db9eafb8e7bbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 10:32:18.040864    7305 out.go:97] Starting "download-only-709000" primary control-plane node in "download-only-709000" cluster
	I0920 10:32:18.040875    7305 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:32:18.107518    7305 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:32:18.107531    7305 cache.go:56] Caching tarball of preloaded images
	I0920 10:32:18.108374    7305 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:32:18.112533    7305 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 10:32:18.112541    7305 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:32:18.201207    7305 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 10:32:22.292906    7305 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:32:22.293089    7305 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0920 10:32:22.814346    7305 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 10:32:22.814534    7305 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/download-only-709000/config.json ...
	I0920 10:32:22.814549    7305 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19679-6783/.minikube/profiles/download-only-709000/config.json: {Name:mk5b2a84786909c787e7f735e50df9e2f8187d59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 10:32:22.814791    7305 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 10:32:22.814916    7305 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19679-6783/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-709000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-709000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-709000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-927000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-927000: exit status 85 (64.201708ms)

-- stdout --
	* Profile "addons-927000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-927000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-927000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-927000: exit status 85 (60.416958ms)

-- stdout --
	* Profile "addons-927000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-927000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.8s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I0920 10:44:11.368764    7279 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 10:44:11.368903    7279 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W0920 10:44:13.279583    7279 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0920 10:44:13.279811    7279 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0920 10:44:13.279871    7279 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/001/docker-machine-driver-hyperkit
I0920 10:44:13.796713    7279 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x104d4ed40 0x104d4ed40 0x104d4ed40 0x104d4ed40 0x104d4ed40 0x104d4ed40 0x104d4ed40] Decompressors:map[bz2:0x1400051fc90 gz:0x1400051fc98 tar:0x1400051fc40 tar.bz2:0x1400051fc50 tar.gz:0x1400051fc60 tar.xz:0x1400051fc70 tar.zst:0x1400051fc80 tbz2:0x1400051fc50 tgz:0x1400051fc60 txz:0x1400051fc70 tzst:0x1400051fc80 xz:0x1400051fca0 zip:0x1400051fcb0 zst:0x1400051fca8] Getters:map[file:0x14001429d40 http:0x14000b011d0 https:0x14000b01220] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 10:44:13.797600    7279 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/001/docker-machine-driver-hyperkit
I0920 10:44:16.889788    7279 install.go:79] stdout: 
W0920 10:44:16.889946    7279 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/001/docker-machine-driver-hyperkit 

I0920 10:44:16.889973    7279 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/001/docker-machine-driver-hyperkit]
I0920 10:44:16.903112    7279 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/001/docker-machine-driver-hyperkit]
I0920 10:44:16.914072    7279 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/001/docker-machine-driver-hyperkit]
I0920 10:44:16.922758    7279 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3872575729/001/docker-machine-driver-hyperkit]
I0920 10:44:16.939065    7279 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 10:44:16.939183    7279 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
--- PASS: TestHyperKitDriverInstallOrUpdate (10.80s)
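Note: the install log above traces the driver update protocol: validate whatever docker-machine-driver-hyperkit is on PATH; if that fails, download the arch-suffixed release asset, falling back to the unsuffixed asset name when the arch-specific checksum 404s. The two sudo commands then give the binary root ownership and the setuid bit, which the hyperkit driver needs in order to create VMs without prompting; afterwards the mode string should show an "s" in place of the owner execute bit:

$ ls -l docker-machine-driver-hyperkit
-rwsr-xr-x  1 root  wheel  ...  docker-machine-driver-hyperkit    # "s" = setuid root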

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 status: exit status 7 (31.951291ms)

-- stdout --
	nospam-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 status: exit status 7 (30.581375ms)

-- stdout --
	nospam-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 status: exit status 7 (30.707ms)

-- stdout --
	nospam-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 pause: exit status 83 (40.847708ms)

-- stdout --
	* The control-plane node nospam-559000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-559000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 pause: exit status 83 (39.92725ms)

-- stdout --
	* The control-plane node nospam-559000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-559000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 pause: exit status 83 (38.7665ms)

-- stdout --
	* The control-plane node nospam-559000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-559000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 unpause: exit status 83 (40.7345ms)

-- stdout --
	* The control-plane node nospam-559000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-559000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 unpause: exit status 83 (38.3855ms)

-- stdout --
	* The control-plane node nospam-559000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-559000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 unpause: exit status 83 (40.130458ms)

-- stdout --
	* The control-plane node nospam-559000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-559000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (9.02s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 stop: (3.657349542s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 stop: (2.160537875s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-559000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-559000 stop: (3.199999542s)
--- PASS: TestErrorSpam/stop (9.02s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19679-6783/.minikube/files/etc/test/nested/copy/7279/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.73s)

TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-968000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1580660108/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cache add minikube-local-cache-test:functional-968000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-968000 cache add minikube-local-cache-test:functional-968000: (1.377618625s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cache delete minikube-local-cache-test:functional-968000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-968000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 config get cpus: exit status 14 (32.072292ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 config get cpus: exit status 14 (35.1905ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-968000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-968000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (158.890875ms)

-- stdout --
	* [functional-968000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0920 10:34:03.915755    7870 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:34:03.915955    7870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:34:03.915959    7870 out.go:358] Setting ErrFile to fd 2...
	I0920 10:34:03.915963    7870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:34:03.916145    7870 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:34:03.917576    7870 out.go:352] Setting JSON to false
	I0920 10:34:03.937457    7870 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5614,"bootTime":1726848029,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:34:03.937526    7870 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:34:03.941580    7870 out.go:177] * [functional-968000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0920 10:34:03.949561    7870 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:34:03.949618    7870 notify.go:220] Checking for updates...
	I0920 10:34:03.955460    7870 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:34:03.958517    7870 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:34:03.961450    7870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:34:03.964568    7870 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:34:03.967520    7870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:34:03.969154    7870 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:34:03.969487    7870 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:34:03.973509    7870 out.go:177] * Using the qemu2 driver based on existing profile
	I0920 10:34:03.980383    7870 start.go:297] selected driver: qemu2
	I0920 10:34:03.980390    7870 start.go:901] validating driver "qemu2" against &{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:34:03.980450    7870 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:34:03.986500    7870 out.go:201] 
	W0920 10:34:03.990533    7870 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 10:34:03.994493    7870 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-968000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-968000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-968000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.689834ms)

-- stdout --
	* [functional-968000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0920 10:34:04.141773    7881 out.go:345] Setting OutFile to fd 1 ...
	I0920 10:34:04.141880    7881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:34:04.141883    7881 out.go:358] Setting ErrFile to fd 2...
	I0920 10:34:04.141885    7881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 10:34:04.142040    7881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19679-6783/.minikube/bin
	I0920 10:34:04.143417    7881 out.go:352] Setting JSON to false
	I0920 10:34:04.160207    7881 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5615,"bootTime":1726848029,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0920 10:34:04.160297    7881 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 10:34:04.165533    7881 out.go:177] * [functional-968000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0920 10:34:04.172527    7881 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 10:34:04.172600    7881 notify.go:220] Checking for updates...
	I0920 10:34:04.180475    7881 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	I0920 10:34:04.183450    7881 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0920 10:34:04.190618    7881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 10:34:04.193574    7881 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	I0920 10:34:04.196509    7881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 10:34:04.199745    7881 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 10:34:04.200039    7881 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 10:34:04.204488    7881 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0920 10:34:04.211515    7881 start.go:297] selected driver: qemu2
	I0920 10:34:04.211522    7881 start.go:901] validating driver "qemu2" against &{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 10:34:04.211583    7881 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 10:34:04.217532    7881 out.go:201] 
	W0920 10:34:04.221507    7881 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 10:34:04.225476    7881 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.9s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.869827541s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-968000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.90s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image rm kicbase/echo-server:functional-968000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-968000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image save --daemon kicbase/echo-server:functional-968000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-968000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
I0920 10:33:28.164260    7279 retry.go:31] will retry after 6.063574688s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:1315: Took "44.228875ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "36.459542ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "58.591209ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.80625ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013642917s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-968000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-968000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-968000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-517000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-517000 --output=json --user=testUser: (3.870146625s)
--- PASS: TestJSONOutput/stop/Command (3.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-905000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-905000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.805417ms)

-- stdout --
	{"specversion":"1.0","id":"0e10ef89-cc26-4605-8f44-46a2a78bfd26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-905000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce1b505a-f646-4845-83f5-f650d8637d5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19679"}}
	{"specversion":"1.0","id":"6847a9e3-fce7-4879-b0e0-d897a34235d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig"}}
	{"specversion":"1.0","id":"a69b4421-9433-4c06-b6e9-bdc82f88aa5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a535dffd-e4e7-4f00-98b8-5aab9f2adcc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bbbd2bcf-66f1-4785-9d38-a5e04ec078f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube"}}
	{"specversion":"1.0","id":"dbf0eb66-2eb6-4fb9-a6f8-70d3d5f8c4f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5d5f42c3-04c0-46d9-a5b4-33b0276ffc36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-905000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-905000
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.04s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.760833ms)

-- stdout --
	* [NoKubernetes-040000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19679-6783/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19679-6783/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-040000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-040000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.144ms)

-- stdout --
	* The control-plane node NoKubernetes-040000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-040000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.42s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.736155917s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.685201792s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.42s)

TestNoKubernetes/serial/Stop (2.08s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-040000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-040000: (2.08107875s)
--- PASS: TestNoKubernetes/serial/Stop (2.08s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-040000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-040000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.962042ms)

-- stdout --
	* The control-plane node NoKubernetes-040000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-040000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-770000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-705000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-705000 --alsologtostderr -v=3: (2.004538125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.00s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-705000 -n old-k8s-version-705000: exit status 7 (47.44475ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-705000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/no-preload/serial/Stop (3.62s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-918000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-918000 --alsologtostderr -v=3: (3.615346s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.62s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-918000 -n no-preload-918000: exit status 7 (39.940792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-918000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/embed-certs/serial/Stop (2.75s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-391000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-391000 --alsologtostderr -v=3: (2.749749708s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.75s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-959000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-959000 --alsologtostderr -v=3: (3.4168675s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.42s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (54.35ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-391000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-959000 -n default-k8s-diff-port-959000: exit status 7 (59.613042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-959000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-331000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (1.86s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-331000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-331000 --alsologtostderr -v=3: (1.860423s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.86s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-331000 -n newest-cni-331000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-331000 -n newest-cni-331000: exit status 7 (58.082166ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-331000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.7s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2350226042/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726853608312394000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2350226042/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726853608312394000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2350226042/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726853608312394000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2350226042/001/test-1726853608312394000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.352ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:28.374192    7279 retry.go:31] will retry after 394.450244ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.361875ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:28.859319    7279 retry.go:31] will retry after 575.590506ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.351792ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:29.523694    7279 retry.go:31] will retry after 915.300436ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.457333ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:30.528895    7279 retry.go:31] will retry after 870.348502ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.130125ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:31.486791    7279 retry.go:31] will retry after 1.748940869s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.8595ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:33.326958    7279 retry.go:31] will retry after 4.426206599s: exit status 83
I0920 10:33:34.230014    7279 retry.go:31] will retry after 8.798334639s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.641916ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:37.841084    7279 retry.go:31] will retry after 2.921386987s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.857417ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo umount -f /mount-9p": exit status 83 (45.897667ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2350226042/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.70s)

TestFunctional/parallel/MountCmd/specific-port (11.87s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4125848440/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.051833ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:41.077135    7279 retry.go:31] will retry after 678.634763ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.191792ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:41.844320    7279 retry.go:31] will retry after 479.855399ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.220791ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:42.410770    7279 retry.go:31] will retry after 1.219062348s: exit status 83
I0920 10:33:43.030567    7279 retry.go:31] will retry after 12.625537958s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.770917ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:43.719954    7279 retry.go:31] will retry after 2.48311281s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.263375ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:46.287792    7279 retry.go:31] will retry after 2.711149058s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.5865ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:49.086871    7279 retry.go:31] will retry after 3.547571227s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.252417ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo umount -f /mount-9p": exit status 83 (46.793375ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4125848440/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.87s)

TestFunctional/parallel/MountCmd/VerifyCleanup (10.96s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup141433874/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup141433874/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup141433874/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (81.492792ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:52.970671    7279 retry.go:31] will retry after 414.830097ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (84.228125ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:53.472007    7279 retry.go:31] will retry after 821.605144ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (86.797ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:54.382725    7279 retry.go:31] will retry after 1.019399239s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (84.377083ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:55.488807    7279 retry.go:31] will retry after 2.148386294s: exit status 83
I0920 10:33:55.658184    7279 retry.go:31] will retry after 17.926823472s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (86.940458ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:33:57.726479    7279 retry.go:31] will retry after 3.583831703s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (87.715833ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
I0920 10:34:01.400309    7279 retry.go:31] will retry after 1.972899523s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (88.020166ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup141433874/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup141433874/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup141433874/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (10.96s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.34s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-064000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-064000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-064000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-064000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-064000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-064000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-064000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-064000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-064000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-064000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-064000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: /etc/hosts:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: /etc/resolv.conf:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-064000

>>> host: crictl pods:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: crictl containers:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> k8s: describe netcat deployment:
error: context "cilium-064000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-064000" does not exist

>>> k8s: netcat logs:
error: context "cilium-064000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-064000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-064000" does not exist

>>> k8s: coredns logs:
error: context "cilium-064000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-064000" does not exist

>>> k8s: api server logs:
error: context "cilium-064000" does not exist

>>> host: /etc/cni:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: ip a s:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: ip r s:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: iptables-save:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: iptables table nat:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-064000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-064000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-064000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-064000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-064000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-064000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-064000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-064000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-064000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-064000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-064000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: kubelet daemon config:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> k8s: kubelet logs:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-064000

>>> host: docker daemon status:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: docker daemon config:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: docker system info:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: cri-docker daemon status:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: cri-docker daemon config:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: cri-dockerd version:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: containerd daemon status:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: containerd daemon config:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: containerd config dump:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: crio daemon status:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: crio daemon config:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: /etc/crio:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

>>> host: crio config:
* Profile "cilium-064000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064000"

----------------------- debugLogs end: cilium-064000 [took: 2.230826458s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-064000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-064000
--- SKIP: TestNetworkPlugins/group/cilium (2.34s)

TestStartStop/group/disable-driver-mounts (0.11s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-180000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-180000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
